Guides: Lightweight AI assistants for Raspberry Pi, Linux, and bots

Step-by-step guides for running PicoClaw as a lightweight AI assistant on Raspberry Pi (including Pi 5) and Linux, plus Windows WSL2, Telegram and Discord bots, local LLMs (Ollama), Docker and Docker Compose, VPS and systemd, nginx HTTPS, Cloudflare Tunnel, Tailscale, GitHub Actions, n8n webhooks, and major LLM APIs (OpenAI GPT, Anthropic Claude, Google Gemini, Groq, DeepSeek, OpenRouter, OpenAI-compatible gateways). For long-form essays on edge AI, ops, and LLM strategy, see the PicoClaw blog.

Run a Raspberry Pi AI assistant

Turn a $10–$35 Raspberry Pi into a persistent AI assistant that listens for events, calls cloud or local LLMs, and sends notifications.

  • Optimised for low‑RAM edge devices
  • Systemd service + health checks for reliability
  • Example workflows for home automation and monitoring

Self‑hosted AI assistant on any Linux box

Run PicoClaw as a self‑hosted AI agent on a VPS, NUC, or homelab server with cron‑driven jobs, webhooks, and local‑only data flow.

  • Use Docker or bare‑metal Go binaries
  • Connect to OpenAI, Groq, or local models
  • Build automations without handing data to SaaS bots

Home Assistant AI automations

Wire Home Assistant automations into PicoClaw so a lightweight AI assistant can react to smart‑home events and send intelligent responses.

  • Use webhooks to trigger PicoClaw flows
  • Summarise energy and security events with LLMs
  • Run everything on a Raspberry Pi or small server
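One hedged way to wire this up is a Home Assistant `rest_command` that POSTs events to a PicoClaw webhook. The hostname, port, and payload fields below are assumptions for illustration, not a fixed API:

```yaml
# configuration.yaml sketch: forward smart-home events to PicoClaw
# (hostname, port, and payload shape are illustrative assumptions)
rest_command:
  notify_picoclaw:
    url: "http://picoclaw.local:8080/webhook"
    method: post
    content_type: "application/json"
    payload: '{"event": "{{ event_type }}", "detail": "{{ detail }}"}'
```

Call `rest_command.notify_picoclaw` from any automation's action block to hand the event off to the assistant.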

Docker homelab AI assistant

Run PicoClaw in Docker on your homelab server or NAS to centralise AI automations next to your other self‑hosted services.

  • Deploy PicoClaw as a lightweight container
  • Integrate with monitoring, chat, and dashboards
  • Keep prompts and logs inside your own lab

Linux cron AI jobs

Create AI‑powered cron jobs that call PicoClaw on a schedule to turn raw logs and metrics into readable summaries and alerts.

  • Daily or hourly AI summaries of logs
  • Scheduled status reports for teams
  • Simple, low‑overhead automation pattern
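The pattern can be as small as one crontab line. The `picoclaw` subcommand and flags here are assumptions for illustration, since the exact CLI may differ:

```cron
# Hypothetical: every morning at 07:00, summarise yesterday's SSH activity
# (the picoclaw invocation below is a sketch, not the exact CLI)
0 7 * * * journalctl -u ssh --since yesterday | picoclaw agent "Summarise these SSH events" >> /var/log/ai-summary.log 2>&1
```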

Windows WSL2 AI assistant

Run PicoClaw on Windows using WSL2. Install in Linux, configure providers, and run your lightweight AI assistant from the terminal.

  • Fast startup for local development
  • CLI and gateway options
  • Works with Raspberry Pi style setups

Telegram bot setup

Set up a Telegram AI assistant with PicoClaw using `picoclaw gateway` and your LLM provider keys.

  • Gateway-based chat integration
  • Lightweight, always-on assistant
  • Perfect for small automations

Discord bot setup

Run a Discord AI bot with PicoClaw. Connect Discord messages to lightweight AI responses through the gateway.

  • Configure gateway and providers
  • Instant chat responses
  • Add scheduled updates with Heartbeat

Local LLM with Ollama

Use PicoClaw with on-device or local models by connecting it to Ollama. Keep prompts and logs in your environment.

  • Privacy-first local inference
  • Self-hosted assistant workflows
  • Works well for Raspberry Pi and homelabs
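Ollama serves a local HTTP API on port 11434 by default, including OpenAI-compatible routes under `/v1`. As a sketch of the request shape an assistant sends to it (the model tag is just an example):

```python
import json

# Ollama's default local endpoint; OpenAI-compatible routes live under /v1
OLLAMA_BASE_URL = "http://localhost:11434/v1"

def chat_payload(prompt: str, model: str = "llama3.2") -> dict:
    """Build an OpenAI-shaped chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

body = chat_payload("Summarise today's sensor readings in two sentences.")
# POST this as JSON to f"{OLLAMA_BASE_URL}/chat/completions"
print(json.dumps(body, indent=2))
```

Because the request shape is the standard OpenAI one, the same payload works unchanged against cloud providers if you later move off-device.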

Raspberry Pi performance tuning

Tune PicoClaw on Raspberry Pi for smoother long-running assistant behaviour. Reduce CPU spikes and optimise workloads.

  • Service patterns and resource control
  • Smarter scheduling with Heartbeat
  • Lower load with model + prompt choices

RISC-V setup

Deploy PicoClaw on RISC-V hardware. Choose the correct release, configure providers, and run your lightweight AI assistant.

  • Runs on open RISC-V platforms
  • CLI and gateway assistant options
  • Ideas for edge and on-prem deployments

OpenRouter vs OpenAI

Pick the right LLM provider for a lightweight AI assistant. Compare OpenRouter and OpenAI for latency, cost, and model variety.

  • Choose based on assistant workflow
  • Compare latency and cost trade-offs
  • Set up via configuration

Docker Compose for PicoClaw

Deploy PicoClaw as an always-on assistant using Docker Compose with config volumes and reliable restarts.

  • Simple `compose.yml` setup
  • Persistent config and logs
  • Works with homelab dashboards
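A minimal `compose.yml` sketch; the image tag and mount paths are placeholders, not the project's published layout:

```yaml
# compose.yml sketch (image name and container paths are assumptions)
services:
  picoclaw:
    image: picoclaw:latest          # placeholder: build or pull your actual image
    restart: unless-stopped         # come back up after reboots or crashes
    volumes:
      - ./config:/app/config        # persist config outside the container
      - ./logs:/app/logs            # keep logs for dashboards and debugging
```

`restart: unless-stopped` plus host-mounted volumes is what makes the container behave like an always-on service rather than a throwaway job.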

Groq for fast LLM calls

Use Groq as your PicoClaw provider for low-latency assistants, bots, and cron jobs—especially on small VPS or Raspberry Pi front-ends.

  • Fast inference for interactive chat flows
  • Practical free tier for experiments
  • Works with gateway and agent patterns

DeepSeek API on a budget

Route PicoClaw to DeepSeek when you want strong models with aggressive pricing for high-volume summaries and alerts.

  • Cost-aware cloud backend
  • Same lightweight orchestration on the host
  • Pair with cron or Heartbeat for schedules

OpenAI-compatible local APIs

Point PicoClaw at LM Studio, vLLM, LiteLLM, or any OpenAI-shaped HTTP API for self-hosted models and internal gateways.

  • LAN or homelab model servers
  • Swap backends without rewriting automations
  • Complements the Ollama-focused guide
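These servers all speak the same OpenAI-shaped HTTP API, so switching backends is mostly a base-URL change. The ports below are each project's upstream defaults; your deployment may differ:

```python
# Default local base URLs for common OpenAI-compatible servers
# (upstream defaults; adjust host and port to your deployment)
BACKENDS = {
    "lm-studio": "http://localhost:1234/v1",
    "vllm": "http://localhost:8000/v1",
    "litellm": "http://localhost:4000/v1",   # LiteLLM proxy
    "ollama": "http://localhost:11434/v1",
}

def base_url(backend: str) -> str:
    """Return the OpenAI-compatible base URL for a named backend."""
    return BACKENDS[backend]

# Swapping backends means changing one URL, not rewriting automations:
print(base_url("vllm"))
```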

n8n webhook automation

Trigger PicoClaw from n8n workflows for self-hosted AI steps next to hundreds of SaaS connectors.

  • HTTP nodes call your assistant or API
  • Visual debugging for complex flows
  • Keep secrets in n8n credentials

systemd service on Linux

Run PicoClaw under systemd for reboot-safe, production-style assistants on servers and Raspberry Pi OS.

  • Unit file patterns and journalctl
  • Restart policies and hardening tips
  • Fits gateway and long-running modes
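A unit-file sketch showing the usual patterns; the binary path, subcommand, user, and writable directory are assumptions about your install:

```ini
# /etc/systemd/system/picoclaw.service (sketch; paths and args are assumptions)
[Unit]
Description=PicoClaw lightweight AI assistant
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/picoclaw gateway
Restart=on-failure
RestartSec=5
User=picoclaw
# Light hardening; relax if the assistant needs broader filesystem access
NoNewPrivileges=true
ProtectSystem=strict
ReadWritePaths=/var/lib/picoclaw

[Install]
WantedBy=multi-user.target
```

Enable it with `sudo systemctl enable --now picoclaw` and follow logs with `journalctl -u picoclaw -f`.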

VPS headless AI agent

Deploy PicoClaw on a cheap headless VPS: SSH, systemd, webhooks, and cron without a heavy Node or Python stack.

  • Small RAM instances stay viable
  • Always-on automation in the cloud
  • TLS and firewall guidance pointers

Google Gemini API

Use Gemini from AI Studio as your PicoClaw provider for multimodal models while keeping the agent lightweight on-device.

  • Official `gemini` provider slot
  • Works with gateway and scheduled jobs
  • Mix with other clouds for A/B tests

Anthropic Claude API

Run Claude-backed assistants through PicoClaw on Pi, VPS, or Docker—quality from Anthropic, tiny footprint locally.

  • Configure `anthropic` in providers
  • Watch token use on long prompts
  • Pair with Telegram or Discord bots

OpenAI GPT API

Direct OpenAI integration for GPT models when you want the official API and billing without a heavy local stack.

  • Simple provider setup in config
  • Compare with OpenRouter when needed
  • Ideal for cron summaries and chat bots

nginx HTTPS reverse proxy

Terminate TLS with Let’s Encrypt and proxy to PicoClaw for safe webhooks and public gateways.

  • Production-style TLS termination
  • Rate limits and path rules
  • Fits VPS and homelab setups
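A server-block sketch for TLS termination in front of a local gateway; the upstream port 8080 is an assumption, and the certificate paths assume certbot's default layout:

```nginx
# Sketch: HTTPS in front of a local PicoClaw gateway
# (upstream port and certbot paths are assumptions)
server {
    listen 443 ssl;
    server_name assistant.example.com;

    ssl_certificate     /etc/letsencrypt/live/assistant.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/assistant.example.com/privkey.pem;

    location /webhook {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Scoping the `location` to `/webhook` keeps the rest of the service off the public internet even though TLS terminates here.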

Cloudflare Tunnel

Expose PicoClaw through cloudflared when you cannot open inbound ports, which makes it a good fit for CGNAT'd homelabs.

  • Outbound-only connection to Cloudflare
  • Combine with Access / WAF
  • Alternative to raw port forwarding
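A cloudflared `config.yml` sketch; the tunnel ID, hostname, and local port are placeholders for your own values:

```yaml
# ~/.cloudflared/config.yml sketch (tunnel ID, hostname, and port are placeholders)
tunnel: <TUNNEL-ID>
credentials-file: /home/pi/.cloudflared/<TUNNEL-ID>.json
ingress:
  - hostname: assistant.example.com
    service: http://localhost:8080   # assumed local PicoClaw port
  - service: http_status:404         # catch-all rule required by cloudflared
```

The connection is outbound-only: cloudflared dials Cloudflare's edge, so no router port forwarding is needed.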

Tailscale homelab access

Reach PicoClaw only on your tailnet: stable DNS between Pi, NAS, laptop, and GPU box.

  • Private mesh, no public HTTP required
  • Split control-plane vs public webhooks
  • Great with local Ollama endpoints

Raspberry Pi 5 assistant

The Pi 5 adds CPU and I/O headroom for running the gateway and automations side by side, while PicoClaw itself still uses minimal RAM.

  • 64-bit Pi OS + ARM64 binary
  • Cloud or on-LAN models
  • Builds on the general Pi guide

GitHub Actions

Trigger LLM workflows from CI on push, on a schedule, or manually, with thin runners posting to your PicoClaw API.

  • Secrets for auth, not raw LLM keys in YAML
  • Needs reachable URL or self-hosted runner
  • Complements n8n and cron patterns
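A workflow sketch for the scheduled-plus-manual pattern; the endpoint URL, auth header, and payload are assumptions about your deployment, with both kept in repository secrets rather than in the YAML:

```yaml
# .github/workflows/daily-summary.yml sketch
# (endpoint URL and payload shape are assumptions about your deployment)
name: Daily AI summary
on:
  schedule:
    - cron: "0 7 * * *"   # 07:00 UTC daily
  workflow_dispatch:       # allow manual runs from the Actions tab

jobs:
  summarise:
    runs-on: ubuntu-latest
    steps:
      - name: Post to PicoClaw endpoint
        run: |
          curl -sf -X POST "$PICOCLAW_URL/webhook" \
            -H "Authorization: Bearer $PICOCLAW_TOKEN" \
            -d '{"task": "daily-summary"}'
        env:
          PICOCLAW_URL: ${{ secrets.PICOCLAW_URL }}
          PICOCLAW_TOKEN: ${{ secrets.PICOCLAW_TOKEN }}
```

The runner stays thin: it only posts a trigger, so the LLM keys live on your PicoClaw host, not in GitHub.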

What you can build with PicoClaw

PicoClaw is designed for lightweight AI automation: scheduled tasks, webhook‑driven agents, and background assistants that run where typical chatbots cannot. Use these guides as starting points and adapt them for:

  • Smart‑home and IoT monitoring on Raspberry Pi and other SBCs
  • Server‑side cron jobs that summarise logs, metrics, or reports using LLMs
  • On‑prem or air‑gapped environments that require local AI agents

When you are ready to go deeper, visit the Docs and Configuration reference.