Complete Optimization Guide for 1 vCPU, 2 GB RAM
Generated: April 20, 2026 · Research by Babu AI
Deep Research Report · 1 CPU · 2 GB RAM

Key settings at a glance:
- --max-old-space-size=1536 for the gateway, 768 for subagents.
- cron.maxConcurrentRuns=1: no parallel cron jobs.
- pruneAfter=7d, maxEntries=100.
- Watch MemAvailable and alert when it drops below 300 MB.

Your choice of messaging channel is the single biggest runtime variable after config. Some channels maintain persistent WebSockets; others are purely webhook-driven. Here's the full breakdown for a 1 CPU / 2 GB VPS:
| Channel | RAM (MB) | Pattern | Verdict |
|---|---|---|---|
| webchat | 10–30 | WebSocket / SSE per session | BEST: zero external dependencies |
| Telegram (webhook) | 20–40 | Webhook-driven, no polling | RECOMMENDED: very lightweight in webhook mode |
| Email (IMAP IDLE) | 15–40 | Persistent IDLE connection | EXCELLENT: event-driven, near-zero polling |
| Slack (webhook) | 40–80 | Webhook callbacks, no persistent socket | FINE: light when using Bolt webhook mode |
| Discord | 50–120 | Persistent WebSocket gateway | VIABLE: moderate overhead from WebSocket state |
| Matrix | 50–100 | Client-server WebSocket / long-poll | VIABLE: use Conduit/Dendrite over Synapse |
| SMS (Twilio) | 5–20 | Webhook inbound, REST outbound | LIGHT: inbound is webhook-only, but costs per message |
| WhatsApp (official API) | 20–50 | Webhook-driven | VIABLE: official API only; unofficial libs are 150–250 MB+ |
| WhatsApp (Baileys / unofficial) | 100–250+ | Persistent WebSocket + session sync | AVOID: session state alone consumes 150+ MB |
| Signal | 150–250+ | Signal protocol is CPU/RAM heavy | AVOID: no official bot API, only experimental workarounds |
Webchat: minimal overhead, no external API dependencies, no rate limits, and no persistent connections to manage. It scales predictably with concurrent users.
Telegram (webhook mode): set up a webhook endpoint and Telegram pushes message events to you with zero polling. Very low RAM footprint, though it requires a public HTTPS URL.
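As a sketch of the webhook setup, Telegram's Bot API exposes a setWebhook method; the token and URL below are placeholders, not real values:

```shell
# Register a webhook with the Telegram Bot API (placeholder token and URL).
BOT_TOKEN="123456:ABC-example"                  # placeholder, use your real bot token
WEBHOOK_URL="https://bot.example.com/telegram"  # must be a public HTTPS URL
API_CALL="https://api.telegram.org/bot${BOT_TOKEN}/setWebhook?url=${WEBHOOK_URL}"
echo "$API_CALL"
# In real use: curl -s "$API_CALL"
```

Once registered, Telegram delivers updates as HTTPS POSTs to your endpoint, so no polling process needs to stay resident.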
Every additional channel adds a persistent connection, its own event queue, and memory overhead. On 1 CPU / 2 GB: run exactly one channel.
On 1 CPU / 2 GB, install only what you use every day. A lean setup looks like:
Everything else: install on demand, then uninstall or disable after use.
| Skill | Why to Skip | RAM Impact |
|---|---|---|
| deep-research | Spawns 5–8 parallel subagents, each running full model inference with long context. Can saturate 2 GB alone. | VERY HIGH |
| skill-creator | Runs heavy tooling, file operations, may trigger sandbox image builds. | HIGH |
| superpowers-planning / brainstorming / review | These are fine in isolation, but combined with other skills they add up. Use one at a time. | MEDIUM |
| Any Playwright/browser skill | Each browser instance consumes 100–300 MB. With no sandbox memory limits enforced, this is dangerous. | VERY HIGH |
| blogwatcher | Runs continuous feed monitoring subprocess. Best avoided on constrained RAM. | MEDIUM |
Skills are loaded into the Node.js process at startup and kept resident. Each skill typically adds 5โ50 MB of RAM depending on dependencies. Loading 10 skills could consume 200โ400 MB of your 2 GB before doing anything.
Audit what's installed and remove anything unused:

openclaw skills list
rm -rf ~/.openclaw/skills/<skill-name>

The following openclaw.json changes are the most impactful for running lean on 1 CPU / 2 GB RAM:
{
"agents": {
"defaults": {
"maxConcurrent": 1, // <-- CRITICAL: one conversation at a time
"subagents": {
"maxConcurrent": 1, // <-- CRITICAL: one subagent task at a time
"maxChildrenPerAgent": 2, // limit parallel child subagents
"maxSpawnDepth": 1, // no recursive subagent spawning
"archiveAfterMinutes": 30 // auto-clean subagent sessions fast
},
"heartbeat": {
"isolatedSession": true, // stateless heartbeat โ no context carry
"lightContext": true, // only HEARTBEAT.md loaded
"every": "60m" // 60 min instead of default 30 min
}
}
}
}
{
"session": {
"maintenance": {
"pruneAfter": "7d", // evict sessions older than 7 days
"maxEntries": 100, // cap session store entries
"rotateBytes": 5000000, // rotate at 5 MB
"maxDiskBytes": "100mb" // per-agent session disk budget
},
"idleMinutes": 30,
"store": "jsonl"
}
}
{
"cron": {
"maxConcurrentRuns": 1, // <-- CRITICAL: never run cron jobs in parallel
"sessionRetention": "1h", // prune cron run sessions after 1 hour
"runLog": {
"maxBytes": 500000, // 500 KB max per cron run log
"keepLines": 500 // truncate aggressively
}
}
}
{
"agents": {
"defaults": {
"contextLimits": {
"memoryGetMaxChars": 6000, // truncate memory reads at 6K chars
"toolResultMaxChars": 8000, // truncate tool outputs at 8K
"postCompactionMaxChars": 900 // minimal context after compaction
},
"compaction": {
"reserveTokensFloor": 4000,
"truncateAfterCompaction": true // wipe raw transcript after summarization
},
"contextPruning": {
"mode": "cache-ttl",
"softTrim": { "maxChars": 2000 }
}
}
}
}
{
"logging": {
"level": "warn", // drop info/debug to reduce log volume
"maxFileBytes": 10000000, // 10 MB cap per log file
"consoleLevel": "error" // minimal console output
}
}
{
"agents": {
"defaults": {
"experimental": {
"localModelLean": true // drop heavyweight default tools
}
}
},
"tools": {
"profile": "minimal" // load only essential tools
}
}
The heartbeat runs a lightweight agent turn on a timer to check for pending work (emails, calendar, notifications).
Set heartbeat.every to "60m"; the default 30 minutes is too frequent for a constrained box, and on a lean VPS checking every hour is plenty. Make sure lightContext: true is set so only HEARTBEAT.md is loaded, not the full workspace bootstrap.
Never raise maxConcurrentRuns above 1; multiple cron jobs firing simultaneously is one of the most common OOM triggers. Set cron.sessionRetention: "1h" so completed cron run sessions don't accumulate.
Each subagent process costs approximately 400 MB RSS. With a 2 GB total budget:
Keep maxConcurrent: 1 and maxChildrenPerAgent: 2 at most. On a truly lean 2 GB VPS, consider setting maxChildrenPerAgent: 1.
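The arithmetic behind that recommendation can be sketched as follows; the 1100 MB gateway figure is an illustrative assumption, not a measured value:

```shell
# Illustrative memory budget for a 2 GB box (gateway RSS is an assumed figure).
TOTAL_MB=2048
OS_RESERVE_MB=500   # OS, buffer cache, kernel (per this guide)
GATEWAY_MB=1100     # assumed typical gateway RSS under the 1536 MB heap cap
SUBAGENT_MB=400     # approximate RSS per subagent (per this guide)
HEADROOM_MB=$((TOTAL_MB - OS_RESERVE_MB - GATEWAY_MB))
echo "headroom=${HEADROOM_MB}MB subagents=$((HEADROOM_MB / SUBAGENT_MB))"
```

With these numbers the headroom fits a single ~400 MB subagent, which is why anything above maxConcurrent: 1 risks an OOM.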
OpenClaw runs on Node.js. By default it may be configured with --max-old-space-size=2048, which is fine on a 4 GB+ machine but deadly on 2 GB. Cap the heap so Node doesn't consume the entire RAM, leaving no headroom for the OS, disk cache, or kernel buffers.
Set via environment or in your systemd service override:
NODE_OPTIONS="--max-old-space-size=1536"
For subagents specifically (if exposed via config):
--max-old-space-size=768
This leaves roughly 500 MB for the OS, buffer cache, and kernel, which is critical headroom on a 2 GB VPS.
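One way to apply the cap persistently is a systemd override drop-in; the service name follows this guide, and the exact unit layout may differ on your install:

```ini
# /etc/systemd/system/openclaw-gateway.service.d/override.conf
[Service]
Environment="NODE_OPTIONS=--max-old-space-size=1536"
```

After editing, run `sudo systemctl daemon-reload` and restart the service for the variable to take effect.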
Linux default swappiness=60 is too aggressive for a VPS where disk I/O is slow. When Node.js heap pages get swapped out, performance collapses.
# Set temporarily
sysctl vm.swappiness=10
# Set permanently
echo "vm.swappiness=10" >> /etc/sysctl.conf
Add a 1–2 GB swap file as emergency headroom. Do NOT treat it as usable memory; any process that hits swap on a VPS will effectively stall.
fallocate -l 2G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo "/swapfile none swap sw 0 0" >> /etc/fstab
Set a hard memory limit on the OpenClaw gateway process via systemd. This makes the kernel prefer killing the OpenClaw process group via cgroup OOM before the system-wide OOM killer fires unpredictably.
# Create override
sudo systemctl edit openclaw-gateway
# Add:
[Service]
MemoryMax=2G
MemorySwapMax=1G
If you want subagents to be killed before the gateway when OOM fires, adjust the OOM score:
# In systemd override:
[Service]
OOMScoreAdjust=-200 # makes gateway less likely to be killed
| Metric | Command | Alert Threshold |
|---|---|---|
| Gateway RSS | `ps aux \| grep openclaw-gateway` | Alert if > 1.2 GB |
| System MemAvailable | `free -m` | Alert if < 300 MB |
| Subagent RSS | `ps aux \| grep "node.*openclaw"` | Each subagent > 500 MB = problem |
| CPU usage | `top -bn1 \| grep node` | Alert if > 80% sustained |
| OOM events | `dmesg \| grep -i oom` | Any OOM = immediate review |
| Disk for sessions | `du -sh ~/.openclaw/agents/` | Alert if > 500 MB |
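The MemAvailable check can be scripted in a few lines; mem_check is a hypothetical helper name, and the threshold mirrors the 300 MB figure above:

```shell
# Hypothetical helper: compare a MemAvailable reading (in MB) against a threshold.
mem_check() {
  avail_mb=$1
  threshold_mb=$2
  if [ "$avail_mb" -lt "$threshold_mb" ]; then
    echo "ALERT"
  else
    echo "OK"
  fi
}

# Live usage on Linux: feed the current MemAvailable value from /proc/meminfo.
# mem_check "$(awk '/MemAvailable/ {print int($2/1024)}' /proc/meminfo)" 300
```

Wire the ALERT branch to whatever notification path you already have (email, Telegram message) and run it from cron.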
openclaw gateway status # gateway health, memory, active sessions
openclaw status # overall system status
openclaw logs # recent log entries
openclaw health # deep health check with token
systemctl --user show openclaw-gateway # MemoryCurrent vs MemoryMax
A minimal, lean monitoring stack for a 2 GB VPS:
- MemAvailable is the most important number; alert when it drops below 300 MB.
- Run dmesg | grep -i kill after any suspected OOM event.
- Poll free -m and alert if MemAvailable < 350 MB.
- Set logging.maxFileBytes: 10000000 so logs don't consume disk.

| Anti-Pattern | Impact | Fix |
|---|---|---|
| Multiple channels at once | RAM EXHAUSTION | Run exactly one channel. Telegram webhook alone is plenty. |
| Sandbox mode (Docker) | DOUBLES RAM FOOTPRINT | Set agents.defaults.sandbox.mode: "off" |
| deep-research skill | OOM TRIGGER | Never run on 2 GB. If needed, run manually in isolated session. |
| cron.maxConcurrentRuns > 1 | PARALLEL OOM SPIKE | Always set to 1 on constrained VPS. |
| WhatsApp + Baileys | 150–250 MB+ | Use official WhatsApp Business API webhook only, or skip WhatsApp. |
| Signal | NO OFFICIAL API | Avoid entirely. No viable bot API, heavy encryption overhead. |
| Browser automation / Playwright | 100–300 MB/browser | Never run on 2 GB without explicit memory caps. |
| Large channel history limits | CONTEXT BLOAT | Keep history limit ≤ 20 messages per channel. |
| Heartbeat every 30 minutes | FREQUENT CONTENTION | Set to 60 minutes on lean VPS. |
| Docker install on 1 GB VPS | PNPM OOM KILLED | Use non-Docker install. 2 GB minimum for Docker builds. |
If your VPS runs out of memory, here's the recovery sequence in order:
openclaw gateway stop && openclaw gateway start
This frees all held memory and resets session state.
dmesg | grep -i oom
Exit 137 = OOM kill confirmation.
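The 137 comes from the shell convention of reporting fatal signals as 128 + signal number; SIGKILL is signal 9, which is what the OOM killer sends. A quick demonstration:

```shell
# A process killed with SIGKILL (as the OOM killer does) exits with 128 + 9 = 137.
sh -c 'kill -9 $$' || echo "exit=$?"
```

So whenever a service log or container runtime reports exit code 137, suspect an OOM kill and check dmesg.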
find ~/.openclaw/cron/runs/ -type f -delete
- Lower session.maintenance.maxEntries and restart.
- Re-check openclaw.json and restart: confirm agents.defaults.sandbox.mode: "off" and cron.maxConcurrentRuns: 1.
- Clear cached media: rm -rf ~/.openclaw/media/
openclaw gateway status && free -m
Run this after every restart until stable.
- agents.defaults.maxConcurrent: 1
- agents.defaults.subagents.maxConcurrent: 1
- cron.maxConcurrentRuns: 1
- heartbeat.every: "60m" with lightContext: true
- agents.defaults.sandbox.mode: "off"
- session.maintenance.pruneAfter: "7d"
- session.maintenance.maxEntries: 100
- logging.level: "warn"
- NODE_OPTIONS="--max-old-space-size=1536" set
- vm.swappiness=10 set
- deep-research, skill-creator, browser skills NOT installed

One channel + sequential sessions + no sandbox + 1536 MB heap cap + swap + 60-min heartbeat = survivable on 2 GB.
Any deviation from this formula requires compensating adjustments elsewhere. The tighter you run, the more headroom you have for the work that actually matters.
Research compiled by Babu AI for Thota · April 20, 2026 · OpenClaw on Lean VPS Project