
Deploy OpenClaw AI: The Secure Ubuntu Server Guide

Stop risking your laptop's security. Learn the SysAdmin way to install, proxy, and secure your 24/7 autonomous agent on a Dedicated Bare Metal Server.

The Security Imperative: Why a Dedicated Server?

Many beginners run the famous curl -fsSL https://openclaw.ai/install.sh | bash command directly on their personal laptops. This is a massive security risk.

OpenClaw is an agent that requires deep shell access and file read/write permissions. If you fall victim to a Prompt Injection attack (e.g., an attacker hides "delete all files" in an email), your agent might execute rm -rf / on your personal machine.

The enterprise-grade solution is to host OpenClaw on an isolated Ubuntu Dedicated Server. By using strict Nginx rate limits, Docker sandboxing, and execution approval safeguards, you create a Zero-Trust fortress for your AI assistant.

Step 1: System Prep, Node.js & Docker

OpenClaw's Gateway relies heavily on modern JavaScript runtimes. It requires Node.js version 22 or higher. We also need to install Docker upfront so we can sandbox the AI agent's actions later.

# 1. Update the system
sudo apt update && sudo apt upgrade -y

# 2. Install essential tools and Docker
sudo apt install curl git build-essential nginx docker.io -y

# 3. Enable Docker to start on boot
sudo systemctl enable --now docker

# 4. Add the NodeSource repository for Node.js 22
curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -

# 5. Install Node.js
sudo apt install -y nodejs

# 6. Verify the installation
node -v
docker --version
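If you script this provisioning, it can be worth failing fast when the runtime is too old. Below is a minimal sketch of such a guard; the function name node_major_ok and its output strings are our own illustration, not part of OpenClaw or Node.js:

```shell
# Minimal guard for the "Node.js 22 or higher" requirement.
# The function name and output strings are illustrative only.
node_major_ok() {
  local ver=${1#v}          # strip the leading "v" (e.g. v22.11.0 -> 22.11.0)
  local major=${ver%%.*}    # keep everything before the first dot
  if [ "$major" -ge 22 ]; then
    echo "ok"
  else
    echo "too old"
  fi
}

node_major_ok "v22.11.0"   # prints: ok
node_major_ok "v18.19.1"   # prints: too old
```

In a real provisioning script you would call it as node_major_ok "$(node -v)" and abort the run on "too old".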

Step 2: Installing the OpenClaw Daemon

We will bypass the generic automated script and use npm directly. This ensures the CLI is installed globally and gives us more control.

# 1. Install the OpenClaw package globally
sudo npm install -g openclaw@latest

# 2. Verify the installation path
openclaw --version

# 3. Run the onboarding wizard and register it as a background service
openclaw onboard --install-daemon

# NOTE: During setup, when asked to bind the gateway, choose "Localhost (127.0.0.1)"
# Do NOT expose it to 0.0.0.0 to prevent unauthorized public access.

# 4. Verify the Gateway status
openclaw gateway status

Step 3: Nginx Rate Limiting, Firewall & SSL

Exposing an AI gateway directly to the internet makes it vulnerable to brute-force attacks and spam. We must configure Nginx with rate limiting (limit_req) and WebSocket support. Create a new config file: sudo nano /etc/nginx/sites-available/openclaw

# Define a rate limit zone to prevent API abuse (10 requests per second)
limit_req_zone $binary_remote_addr zone=ai_limit:10m rate=10r/s;

server {
    listen 80;
    server_name ai.yourdomain.com;

    location / {
        # Apply Rate Limiting
        limit_req zone=ai_limit burst=20 nodelay;

        proxy_pass http://127.0.0.1:18789;
        
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # CRITICAL: WebSocket Support for OpenClaw UI
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        
        # Avoid timeouts during long AI code execution
        proxy_read_timeout 300;
        proxy_connect_timeout 300;
    }
}

Now, we must configure the UFW firewall. CRITICAL: Always allow OpenSSH before enabling UFW, or you will lock yourself out of your own server!

# 1. Enable Nginx site
sudo ln -s /etc/nginx/sites-available/openclaw /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl restart nginx

# 2. Configure Firewall (UFW)
sudo ufw allow OpenSSH   # CRITICAL: Do not skip this step!
sudo ufw allow 'Nginx Full'
sudo ufw enable

# 3. Secure with SSL (Certbot)
sudo apt install certbot python3-certbot-nginx -y
sudo certbot --nginx -d ai.yourdomain.com

Step 4: Secret Vaults & WhatsApp Integration

Placing API keys inside plain-text configuration files is dangerous: if your server is ever compromised, an attacker can scrape those keys directly. Instead, we use a permission-locked .env file.

# Create the config directory (if onboarding has not already done so) and a secure environment file
mkdir -p ~/.openclaw
echo "ANTHROPIC_API_KEY=sk-ant-your-secure-key" > ~/.openclaw/.env
echo "OPENAI_API_KEY=sk-your-openai-key" >> ~/.openclaw/.env

# Lock permissions so only the owner can read it
chmod 600 ~/.openclaw/.env
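You can confirm the lockdown worked with stat. The snippet below demonstrates the idea on a throwaway file (a stand-in for ~/.openclaw/.env, so it is safe to run anywhere; stat -c is the GNU coreutils form used on Ubuntu):

```shell
# Demonstrate the 600 permission lock on a temporary stand-in file
tmpfile=$(mktemp)
echo "ANTHROPIC_API_KEY=placeholder" > "$tmpfile"
chmod 600 "$tmpfile"

# Octal permissions should now read 600 (owner read/write only)
stat -c '%a' "$tmpfile"   # prints: 600

rm -f "$tmpfile"
```

Run the same stat command against the real ~/.openclaw/.env after creating it; anything other than 600 means the chmod did not take effect.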

Next, configure OpenClaw's primary JSON file (~/.openclaw/openclaw.json) to define your AI model and enable the WhatsApp channel:

{
  "agents": {
    "defaults": {
      "model": { "primary": "anthropic/claude-sonnet-4-6" }
    }
  },
  "channels": {
    "whatsapp": {
      "enabled": true,
      "dmPolicy": "pairing",
      "allowFrom": ["+1234567890"]
    }
  }
}
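A single stray comma in openclaw.json will stop the daemon from starting, so it is worth validating the file as strict JSON before restarting. A minimal check, assuming python3 is on the PATH (the heredoc recreates the example config above so the snippet is self-contained):

```shell
# Write the example config to a temp file and validate it as strict JSON
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "agents": {
    "defaults": {
      "model": { "primary": "anthropic/claude-sonnet-4-6" }
    }
  },
  "channels": {
    "whatsapp": {
      "enabled": true,
      "dmPolicy": "pairing",
      "allowFrom": ["+1234567890"]
    }
  }
}
EOF

# json.tool exits non-zero on a parse error
python3 -m json.tool "$cfg" > /dev/null && echo "valid JSON"
rm -f "$cfg"
```

Against the real file, the one-liner is: python3 -m json.tool ~/.openclaw/openclaw.json > /dev/null && echo "valid JSON"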

Restart the daemon and pair WhatsApp using the QR code in the terminal: openclaw channels login --channel whatsapp

Step 5: Zero-Trust Security (Prompt Injection Defense)

To prevent an attacker from taking over your server via prompt injection, we must implement three final defenses in your openclaw.json: Gateway Token Auth, Docker Sandboxing, and Exec Approval.

{
  "gateway": {
    "mode": "local",
    "bind": "loopback",
    "auth": {
      "mode": "token",
      "token": "YOUR_STRONG_SECRET_TOKEN" 
    }
  },
  "agents": {
    "defaults": {
      "sandbox": { 
        "mode": "always",
        "engine": "docker",
        "network": false 
      },
      "tools": {
        "execApproval": true 
      }
    }
  }
}

What does this do?
1. auth.token: Requires a secret token to access the gateway and its Web UI.
2. sandbox: Forces all agent commands to run inside an isolated, disposable Docker container, with networking disabled.
3. execApproval: true: Enforces "human-in-the-loop." If the AI decides to run a high-risk command (like formatting a drive or making a payment), it will pause and message you on WhatsApp asking for explicit approval before proceeding.
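Do not invent YOUR_STRONG_SECRET_TOKEN by hand; generate it. One conventional approach, assuming openssl is installed (it is on stock Ubuntu):

```shell
# Generate a 32-byte (64 hex character) random token for gateway.auth.token
token=$(openssl rand -hex 32)
echo "$token"

# Sanity check: the token should be exactly 64 hex characters long
echo "${#token}"   # prints: 64
```

Paste the printed value into the "token" field of openclaw.json, and treat it like any other credential: never commit it to version control.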

Apply changes and check your active logs for any errors: openclaw daemon restart && openclaw daemon logs

Stop Paying Cloud API Fees

Running a 24/7 AI agent against metered cloud APIs (OpenAI/Anthropic) can drain your budget quickly, since every message and background task is billed per token.

Host OpenClaw and run local models (DeepSeek/Llama) securely on our Enterprise Bare Metal GPUs.

OpenClaw Deployment FAQ

Why do I get a "Port 18789 already in use" error?

This means the OpenClaw daemon is already running in the background. You can check its status using openclaw daemon status or stop the existing process with openclaw daemon stop before starting a new session.

Can I run OpenClaw entirely offline without Anthropic/OpenAI?

Yes! This is the ultimate Sovereign AI setup. You can use Ollama or vLLM installed on a ServerMO Bare Metal GPU to run local models like Llama 3 or DeepSeek 32B. In your openclaw.json, you simply point the provider to your local http://127.0.0.1:11434 endpoint instead of a cloud API.
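As an illustration, a provider entry for a local Ollama endpoint might look like the fragment below. The exact key names (models, providers, baseUrl, api) are our assumption and vary by OpenClaw version, so treat this as a sketch rather than copy-paste config:

```json
{
  "models": {
    "providers": {
      "ollama": {
        "baseUrl": "http://127.0.0.1:11434",
        "api": "openai-compatible"
      }
    }
  },
  "agents": {
    "defaults": {
      "model": { "primary": "ollama/llama3" }
    }
  }
}
```

With a setup like this, no prompt or completion ever leaves your server, which also neutralizes the cloud-API cost problem discussed above.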

How do I monitor what my agent is doing?

You can view the real-time execution logs of your agent by running openclaw daemon logs in your terminal. For historical context, the agent saves its actions in the MEMORY.md file inside its workspace directory.

Ready to Launch with Unmatched Power?

Deploy blazing-fast 1–100Gbps unmetered servers, high-performance GPU rigs, or game-optimized hosting custom-built for speed, reliability, and scale. Whether it’s colocation, compute-intensive tasks, or latency-critical applications, ServerMO delivers. Order now and get online in minutes, fully secured and fully optimized.

