Turn a \$10–\$35 Raspberry Pi into a persistent AI assistant that listens for events, calls cloud or local LLMs, and sends notifications.
- Optimised for low‑RAM edge devices
- Systemd service + health checks for reliability
- Example workflows for home automation and monitoring
Run PicoClaw as a self‑hosted AI agent on a VPS, NUC, or homelab server with cron‑driven jobs, webhooks, and local‑only data flow.
- Use Docker or bare‑metal Go binaries
- Connect to OpenAI, Groq, or local models
- Build automations without handing data to SaaS bots
Wire Home Assistant automations into PicoClaw so a lightweight AI assistant can react to smart‑home events and send intelligent responses.
- Use webhooks to trigger PicoClaw flows
- Summarise energy and security events with LLMs
- Run everything on a Raspberry Pi or small server
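On the Home Assistant side, the webhook call is a standard `rest_command` invoked from an automation. A minimal sketch, assuming a PicoClaw webhook listening on `http://picoclaw.local:8080/hooks/ha` (host, port, and path are placeholders for your own deployment):

```yaml
# configuration.yaml
rest_command:
  picoclaw_notify:
    url: "http://picoclaw.local:8080/hooks/ha"   # assumed PicoClaw endpoint
    method: POST
    content_type: "application/json"
    payload: '{"event": "{{ event }}", "detail": "{{ detail }}"}'

# automations.yaml
- alias: "Forward high power draw to PicoClaw"
  trigger:
    - platform: numeric_state
      entity_id: sensor.home_power
      above: 3000
  action:
    - service: rest_command.picoclaw_notify
      data:
        event: "high_power"
        detail: "{{ states('sensor.home_power') }} W"
```

Variables passed in `data:` are available to the `rest_command` payload template, so one generic command can serve many automations.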
Run PicoClaw in Docker on your homelab server or NAS to centralise AI automations next to your other self‑hosted services.
- Deploy PicoClaw as a lightweight container
- Integrate with monitoring, chat, and dashboards
- Keep prompts and logs inside your own lab
Create AI‑powered cron jobs that call PicoClaw on a schedule to turn raw logs and metrics into readable summaries and alerts.
- Daily or hourly AI summaries of logs
- Scheduled status reports for teams
- Simple, low‑overhead automation pattern
Run PicoClaw on Windows using WSL2. Install in Linux, configure providers, and run your lightweight AI assistant from the terminal.
- Fast startup for local development
- CLI and gateway options
- Works with Raspberry Pi style setups
Set up a Telegram AI assistant with PicoClaw using `picoclaw gateway` and your LLM provider keys.
- Gateway-based chat integration
- Lightweight, always-on assistant
- Perfect for small automations
Run a Discord AI bot with PicoClaw. Connect Discord messages to lightweight AI responses through the gateway.
- Configure gateway and providers
- Instant chat responses
- Add scheduled updates with Heartbeat
Use PicoClaw with on-device or local models by connecting it to Ollama. Keep prompts and logs in your environment.
- Privacy-first local inference
- Self-hosted assistant workflows
- Works well for Raspberry Pi and homelabs
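The Ollama side needs nothing unusual: its server exposes an OpenAI-compatible API on port 11434 by default. The provider entry below is an illustrative shape only, not PicoClaw's documented schema:

```yaml
# Hypothetical provider entry — key names are illustrative, adapt to your config.
providers:
  ollama:
    base_url: "http://localhost:11434/v1"   # Ollama's default OpenAI-compatible endpoint
    model: "llama3.2"                        # any model you have pulled locally
    api_key: "ollama"                        # Ollama ignores the key; many clients require one anyway
```

Pull the model first with `ollama pull llama3.2`, and point `base_url` at another host on your LAN if inference runs on a separate box.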
Tune PicoClaw on Raspberry Pi for smoother long-running assistant behaviour. Reduce CPU spikes and optimise workloads.
- Service patterns and resource control
- Smarter scheduling with Heartbeat
- Lower load with model + prompt choices
Deploy PicoClaw on RISC-V hardware. Choose the correct release, configure providers, and run your lightweight AI assistant.
- Runs on open RISC-V platforms
- CLI and gateway assistant options
- Ideas for edge and on-prem deployments
Pick the right LLM provider for a lightweight AI assistant. Compare OpenRouter and OpenAI for latency, cost, and model variety.
- Choose based on assistant workflow
- Compare latency and cost trade-offs
- Set up via configuration
Deploy PicoClaw as an always-on assistant using Docker Compose with config volumes and reliable restarts.
- Simple `compose.yml` setup
- Persistent config and logs
- Works with homelab dashboards
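A minimal `compose.yml` sketch for the setup above; the image name, gateway port, and container paths are assumptions to adapt to your build:

```yaml
services:
  picoclaw:
    image: picoclaw/picoclaw:latest      # placeholder — substitute your image or a build: block
    restart: unless-stopped              # survives crashes and host reboots
    ports:
      - "8080:8080"                      # gateway/webhook port (assumed)
    volumes:
      - ./config:/app/config             # provider keys and settings persist across updates
      - ./logs:/app/logs
    environment:
      - TZ=Europe/London
```

`docker compose up -d` brings it up; the bind-mounted `./config` and `./logs` directories are what keep state through image upgrades.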
Use Groq as your PicoClaw provider for low-latency assistants, bots, and cron jobs—especially on small VPS or Raspberry Pi front-ends.
- Fast inference for interactive chat flows
- Practical free tier for experiments
- Works with gateway and agent patterns
Route PicoClaw to DeepSeek when you want strong models with aggressive pricing for high-volume summaries and alerts.
- Cost-aware cloud backend
- Same lightweight orchestration on the host
- Pair with cron or Heartbeat for schedules
Point PicoClaw at LM Studio, vLLM, LiteLLM, or any OpenAI-shaped HTTP API for self-hosted models and internal gateways.
- LAN or homelab model servers
- Swap backends without rewriting automations
- Complements the Ollama-focused guide
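Because all of these speak the same OpenAI-shaped protocol, switching backends is usually just a base-URL change. The provider shape below is illustrative, not PicoClaw's documented schema; the default ports listed are the servers' own:

```yaml
# Hypothetical provider entry — only the endpoint changes per backend:
#   LM Studio local server (default):  http://localhost:1234/v1
#   vLLM OpenAI server (default):      http://localhost:8000/v1
#   LiteLLM proxy (default):           http://localhost:4000/v1
providers:
  local:
    base_url: "http://localhost:1234/v1"
    model: "qwen2.5-7b-instruct"   # whatever model the server has loaded
```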
Trigger PicoClaw from n8n workflows for self-hosted AI steps next to hundreds of SaaS connectors.
- HTTP nodes call your assistant or API
- Visual debugging for complex flows
- Keep secrets in n8n credentials
Run PicoClaw under systemd for reboot-safe, production-style assistants on servers and Raspberry Pi OS.
- Unit file patterns and journalctl
- Restart policies and hardening tips
- Fits gateway and long-running modes
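A unit-file sketch for the above; the binary path, service user, and state directory are assumptions for your own install:

```ini
# /etc/systemd/system/picoclaw.service
[Unit]
Description=PicoClaw AI assistant gateway
After=network-online.target
Wants=network-online.target

[Service]
User=picoclaw
ExecStart=/usr/local/bin/picoclaw gateway   # path assumed
Restart=on-failure
RestartSec=5
# Light hardening
NoNewPrivileges=true
ProtectSystem=strict
ReadWritePaths=/var/lib/picoclaw            # state directory assumed

[Install]
WantedBy=multi-user.target
```

Enable with `sudo systemctl enable --now picoclaw` and follow logs with `journalctl -u picoclaw -f`.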
Deploy PicoClaw on a cheap headless VPS: SSH, systemd, webhooks, and cron without a heavy Node or Python stack.
- Small RAM instances stay viable
- Always-on automation in the cloud
- TLS and firewall guidance pointers

Use Gemini from AI Studio as your PicoClaw provider for multimodal models while keeping the agent lightweight on-device.
- Official `gemini` provider slot
- Works with gateway and scheduled jobs
- Mix with other clouds for A/B tests
Run Claude-backed assistants through PicoClaw on Pi, VPS, or Docker—quality from Anthropic, tiny footprint locally.
- Configure `anthropic` in providers
- Watch token use on long prompts
- Pair with Telegram or Discord bots
Use the direct OpenAI integration for GPT models when you want the official API and billing without a heavy local stack.
- Simple provider setup in config
- Compare with OpenRouter when needed
- Ideal for cron summaries and chat bots
Terminate TLS with Let’s Encrypt and proxy to PicoClaw for safe webhooks and public gateways.
- Production-style TLS termination
- Rate limits and path rules
- Fits VPS and homelab setups
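One low-config way to get the TLS termination above is Caddy, which obtains and renews Let's Encrypt certificates automatically. The hostname and upstream port are assumptions:

```
# Caddyfile
assistant.example.com {
    # Certificate issuance and renewal are automatic — no certbot needed.
    reverse_proxy 127.0.0.1:8080   # PicoClaw gateway port (assumed)
}
```

The equivalent nginx + certbot setup works too; Caddy just collapses the certificate lifecycle into the proxy config.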
Expose PicoClaw through cloudflared when you cannot open inbound ports—CGNAT-friendly homelabs.
- Outbound-only connection to Cloudflare
- Combine with Access / WAF
- Alternative to raw port forwarding
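After `cloudflared tunnel create picoclaw`, the tunnel is described by a small config file; the hostname and local port are assumptions:

```yaml
# ~/.cloudflared/config.yml
tunnel: <TUNNEL-ID>                         # printed by `cloudflared tunnel create`
credentials-file: /home/pi/.cloudflared/<TUNNEL-ID>.json
ingress:
  - hostname: assistant.example.com
    service: http://localhost:8080          # PicoClaw gateway port (assumed)
  - service: http_status:404                # required catch-all as the last rule
```

Bind the hostname with `cloudflared tunnel route dns picoclaw assistant.example.com`, then start it with `cloudflared tunnel run picoclaw` — all connections are outbound, so nothing is port-forwarded.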
Reach PicoClaw only on your Tailscale tailnet: stable DNS between Pi, NAS, laptop, and GPU box.
- Private mesh, no public HTTP required
- Split control-plane vs public webhooks
- Great with local Ollama endpoints
Run PicoClaw on 64-bit Raspberry Pi OS for extra CPU and I/O headroom for gateway + automation while the agent still uses minimal RAM.
- 64-bit Pi OS + ARM64 binary
- Cloud or on-LAN models
- Builds on the general Pi guide
Trigger LLM workflows from CI: push, schedule, or manual—thin runners posting to your PicoClaw API.
- Secrets for auth, not raw LLM keys in YAML
- Needs reachable URL or self-hosted runner
- Complements n8n and cron patterns
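A workflow sketch for the thin-runner pattern above; the `PICOCLAW_URL`/`PICOCLAW_TOKEN` secrets and the `/hooks/report` path are assumptions about your own deployment, not a PicoClaw API:

```yaml
# .github/workflows/nightly-report.yml
name: nightly-report
on:
  schedule:
    - cron: "0 6 * * *"        # daily at 06:00 UTC
  workflow_dispatch: {}        # allow manual runs

jobs:
  report:
    runs-on: ubuntu-latest
    steps:
      - name: Post report request to PicoClaw
        env:
          PICOCLAW_URL: ${{ secrets.PICOCLAW_URL }}       # e.g. https://assistant.example.com
          PICOCLAW_TOKEN: ${{ secrets.PICOCLAW_TOKEN }}   # auth for your endpoint, not an LLM key
        run: |
          curl --fail -sS -X POST "$PICOCLAW_URL/hooks/report" \
            -H "Authorization: Bearer $PICOCLAW_TOKEN" \
            -H "Content-Type: application/json" \
            -d '{"repo": "${{ github.repository }}", "run": "${{ github.run_id }}"}'
```

The runner only needs `curl` and network reach to your endpoint; the LLM provider keys stay on the PicoClaw host.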