Rapidsecureclaw connects any AI model to any messaging platform — Telegram, Discord, Slack, WhatsApp, iMessage, Matrix, and more — in a single <8 MB Go binary.
Connect to any platform your users live on
No Docker, no npm, no runtime. Build one binary, drop in a config file, and start talking to every platform at once.
Compile a single static Go binary under 8 MB with zero runtime dependencies. CGO is disabled by default.
One YAML file enables any combination of channels and AI providers. Hot-reload means no restarts needed.
One command starts all configured channels simultaneously. Your AI is live everywhere instantly.
OpenClaw runs on Node.js and needs npm + ~300 MB of node_modules. Rapidsecureclaw ships as one static binary with zero runtime dependencies.
| Metric | Rapidsecureclaw (Go) | OpenClaw (Node.js) | Advantage |
|---|---|---|---|
| Binary / install size | < 8 MB | ~300 MB | 37× smaller |
| Cold-start time | < 50 ms | 600–1 200 ms | 20× faster |
| Memory at idle | ~12 MB | 80–150 MB | 10× less |
| Runtime dependency | none | Node.js ≥ 18 | zero deps |
| Concurrent connections | 100 000+ | ~10 000 | 10× more |
| Hot-reload config | ✓ fsnotify | manual restart | built-in |
| Cross-compile | GOOS/GOARCH | platform-specific | trivial |
| IoT / MQTT | built-in client | extra package | native |
| Mobile push (APNs/FCM) | built-in | not included | native |
Every component is built on the raw standard library where possible, with one well-chosen library where necessary.
Typed JSON frames over WebSocket. Bearer token auth, CORS, rate-limiting, and ping/pong keepalive all included.
Tool-use loop with unlimited tools. Built-in security agent scans code for vulnerabilities, secrets, and risky deps.
fsnotify watches your YAML. Change a model, toggle a channel, rotate a token — no restart needed.
APNs (iOS) and FCM (Android) push with JWT signing. No Firebase SDK, no dependency bloat.
Full MQTT 3.1.1 client from scratch using net/bufio. Publish AI responses back to IoT topics automatically.
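To give a flavor of what "from scratch" means here, this sketch hand-encodes one MQTT 3.1.1 PUBLISH packet at QoS 0; a real client also handles CONNECT, SUBSCRIBE, keepalive, and the socket itself, and this is not Rapidsecureclaw's actual code.

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// encodePublish builds an MQTT 3.1.1 PUBLISH packet at QoS 0:
// fixed header, Remaining Length varint, length-prefixed topic, payload.
func encodePublish(topic string, payload []byte) []byte {
	var body bytes.Buffer
	binary.Write(&body, binary.BigEndian, uint16(len(topic))) // topic length
	body.WriteString(topic)
	body.Write(payload) // QoS 0 carries no packet identifier

	var pkt bytes.Buffer
	pkt.WriteByte(0x30) // PUBLISH, DUP=0, QoS=0, RETAIN=0
	// Remaining Length: MQTT's base-128 varint, 7 bits per byte,
	// continuation bit 0x80.
	n := body.Len()
	for {
		b := byte(n % 128)
		n /= 128
		if n > 0 {
			b |= 0x80
		}
		pkt.WriteByte(b)
		if n == 0 {
			break
		}
	}
	pkt.Write(body.Bytes())
	return pkt.Bytes()
}

func main() {
	pkt := encodePublish("ai/replies", []byte("ok"))
	fmt.Printf("% x\n", pkt)
	// → 30 0e 00 0a 61 69 2f 72 65 70 6c 69 65 73 6f 6b
}
```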
Per-user conversation history with configurable TTL, max-history cap, and optional disk persistence.
Anthropic, OpenAI, or Ollama (offline). Switch models per-session or per-agent without touching channel code.
Messages to disconnected devices are held in an in-memory or persisted queue with TTL and size caps.
Polls an Obsidian vault via the Local REST API plugin. Detects modified notes and appends AI responses inline.
Each channel is a self-contained adapter. Enable what you need in config.yaml — one line to activate, one line to disable.
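A config along these lines is what the adapter model implies — note that every key name below is an illustrative assumption, not the project's documented schema:

```yaml
# Illustrative sketch only; key names are assumptions.
provider:
  name: anthropic        # or: openai, ollama
  model: claude-sonnet-4
channels:
  telegram:
    enabled: true        # one line to activate
    token: ${TELEGRAM_BOT_TOKEN}
  discord:
    enabled: false       # one line to disable
```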
Channel → Session Manager → Provider → Agent. Each layer is independently swappable.
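The layering reduces to a handful of interfaces; this sketch mirrors the names above, but the signatures are assumptions and a stand-in backend replaces a real AI provider so it runs offline:

```go
package main

import "fmt"

// Provider wraps one AI backend (Anthropic, OpenAI, Ollama, ...).
type Provider interface {
	Complete(prompt string) string
}

// Agent runs the tool-use loop on top of a Provider.
type Agent interface {
	Run(input string) string
}

// echoProvider is a stand-in backend so the pipeline runs offline.
type echoProvider struct{}

func (echoProvider) Complete(p string) string { return "echo: " + p }

// simpleAgent forwards straight to its Provider; a real agent would
// loop over tool calls here.
type simpleAgent struct{ p Provider }

func (a simpleAgent) Run(in string) string { return a.p.Complete(in) }

// handleInbound is the channel-facing entry point: any adapter
// (Telegram, MQTT, ...) hands it a user ID and message, and the
// session-manager step (elided here) would attach history.
func handleInbound(a Agent, user, msg string) string {
	return a.Run(fmt.Sprintf("[%s] %s", user, msg))
}

func main() {
	a := simpleAgent{p: echoProvider{}}
	fmt.Println(handleInbound(a, "alice", "hi")) // → echo: [alice] hi
}
```

Because each layer only sees the interface below it, swapping Ollama for Anthropic, or Telegram for Slack, touches exactly one adapter.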
One YAML file. One binary. Unlimited conversations across every platform your users live on.