
Install Hermes Agent on a Linux server: VPS or dedicated

By ServerPoint's Team

Hermes Agent is Nous Research’s open-source self-improving AI agent. This guide walks through installing it on a Linux server, whether you’re starting on a modest VPS or going straight to a dedicated server for larger workloads. It covers Ubuntu, AlmaLinux, and Rocky Linux, plus Docker sandbox setup and a systemd service so the agent keeps running after you log out.

Time: about 15 minutes on a fresh server.

What you need

  • A Linux VPS (at least 2 GB RAM for cloud APIs; 8 GB+ for local Ollama models) or a dedicated server (more room for large local models and multiple isolated agents)
  • SSH access with sudo
  • An API key for your model provider (OpenRouter, Anthropic, OpenAI, Nous Portal, etc.), or plan to run local models with Ollama

Supported distros: Ubuntu 22.04 / 24.04, AlmaLinux 9, Rocky Linux 9. Debian 12 works too.

Step 1: Update your server

Ubuntu / Debian

sudo apt update && sudo apt upgrade -y
sudo apt install -y curl git

AlmaLinux / Rocky Linux

sudo dnf update -y
sudo dnf install -y curl git

Step 2: Install Hermes Agent

A single installer script handles Python 3.11+, Node.js, and the rest of the dependencies:

curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash

Reload your shell so the hermes command is on your PATH:

source ~/.bashrc

Confirm it installed:

hermes --version

Step 3: Initial Hermes setup

hermes setup

The wizard asks:

  1. Model provider (Nous Portal, OpenRouter, Anthropic, OpenAI, local Ollama, or any OpenAI-compatible endpoint)
  2. API key for cloud providers
  3. Default model
  4. Storage location for skills, memory, logs (defaults to ~/.hermes)

To switch providers later: hermes model.

Step 4: Install Docker for sandboxed execution

Hermes can run shell commands and code. For any real workload, isolate that in Docker rather than running directly on the host.

Ubuntu / Debian

sudo apt install -y docker.io
sudo systemctl enable --now docker
sudo usermod -aG docker $USER

Log out and back in (or newgrp docker) so your user runs docker without sudo.

AlmaLinux / Rocky Linux

sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
sudo systemctl enable --now docker
sudo usermod -aG docker $USER
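On either distro, a quick sanity check confirms the daemon is running and your user can reach it (do this after logging back in so the group change takes effect):

```shell
# Pull and run a throwaway test container, then print the daemon version
docker run --rm hello-world
docker info --format '{{.ServerVersion}}'
```

If either command fails with a permission error, the group membership hasn't been picked up yet; see the troubleshooting section below.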

Tell Hermes to use Docker for sandboxing:

hermes config set terminal.backend docker

Step 5: First run

hermes

You’re in the Hermes TUI. Try:

> what's the disk usage on this server, broken down by mount point?

Hermes plans, uses its shell tool, and reports back. The interaction is stored in its memory automatically.

Exit with Ctrl+D or /quit.

Step 6: Messaging gateway (optional)

To reach Hermes from Telegram, Discord, Slack, WhatsApp, or Signal:

hermes gateway

First run walks you through linking each platform. Telegram and Discord are easiest. You paste a bot token and the agent is reachable from your phone.

Step 7: Run Hermes as a systemd service

This step matters: a hermes gateway process started in an SSH session dies the moment the connection drops. Use systemd so Hermes stays up through logouts and reboots.

Create /etc/systemd/system/hermes.service:

[Unit]
Description=Hermes Agent Gateway
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=YOUR_USERNAME
WorkingDirectory=/home/YOUR_USERNAME
ExecStart=/home/YOUR_USERNAME/.local/bin/hermes gateway
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target

Replace YOUR_USERNAME with your Linux user. Confirm the ExecStart path with which hermes.
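If you'd rather not edit the file by hand, the unit can be generated with the values filled in automatically. A sketch, assuming the installer placed hermes somewhere on your PATH (such as ~/.local/bin):

```shell
# Resolve the real hermes path for ExecStart, then write the unit file
HERMES_BIN=$(which hermes)
sudo tee /etc/systemd/system/hermes.service >/dev/null <<EOF
[Unit]
Description=Hermes Agent Gateway
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=$USER
WorkingDirectory=$HOME
ExecStart=$HERMES_BIN gateway
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF
```

The heredoc expands $USER, $HOME, and $HERMES_BIN at write time, so the file on disk contains the concrete values.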

Enable and start:

sudo systemctl daemon-reload
sudo systemctl enable --now hermes
sudo systemctl status hermes

Hermes now runs 24/7, reconnects on crash, survives reboots.
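Day to day, a few systemd commands cover monitoring and maintenance of the service:

```shell
journalctl -u hermes -f            # follow live gateway logs
journalctl -u hermes --since today # review today's activity
sudo systemctl restart hermes      # apply config changes
```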

Step 8: Scheduled jobs

Hermes supports natural-language scheduling. Examples:

hermes cron "every weekday at 8 AM, summarize last night's server logs and send to Telegram"
hermes cron "every hour, check all my websites for 200 responses and alert if any fail"
hermes cron "every Sunday evening, generate a weekly commit summary across all my Git repos"

No crontab syntax needed. Hermes parses the schedule and runs the job unattended.

VPS vs dedicated server for Hermes Agent

Same installation, different scale.

Stick with a VPS if:

  • You’re running Hermes as a personal assistant or for a small team
  • You plan to use cloud model APIs (OpenRouter, Anthropic, OpenAI)
  • Your local-model needs are under 8 GB (3B-7B Ollama models)

A 4 GB VPS is plenty. Scale to 8 GB if you add local Ollama.

Move to a dedicated server if:

  • You want to run larger local models (13B, 34B, 70B) to avoid API costs on high-volume workloads
  • You’re running multiple isolated agents (one per client/project with separate skill libraries) on the same box
  • You want GPU support for fast local inference
  • You need heavy sandbox workloads: lots of nested containers, build pipelines, browser automation at scale
  • You want predictable performance without shared-host variance

A dedicated server with 32-64 GB RAM and NVMe storage runs Hermes plus Ollama with 13B-34B models comfortably. Add a GPU for larger models or higher throughput.

Security basics

  • Firewall. Only expose SSH and the ports your gateway actually needs.
  • Non-root user. Don’t run Hermes as root. Create a normal user, add to docker group, run Hermes as that user.
  • SSH key only. Disable password SSH in /etc/ssh/sshd_config.
  • Review approvals. Leave command approvals on for sensitive operations rather than auto-approving.
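A minimal hardening pass for the first and third items, assuming ufw on Ubuntu/Debian (use firewalld on AlmaLinux/Rocky) and that your SSH key login already works:

```shell
# Firewall: deny everything inbound except SSH
sudo ufw default deny incoming
sudo ufw allow OpenSSH
sudo ufw enable

# SSH: key-only logins. Verify key auth in a second session before restarting!
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl restart ssh
```

Keep one SSH session open while you test the change in another, so a mistake doesn't lock you out.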

Optional: local models with Ollama

Install Ollama alongside Hermes:

curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.2:3b

Point Hermes at Ollama via hermes model and pick “Ollama” or an OpenAI-compatible endpoint at http://localhost:11434.
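Before pointing Hermes at it, confirm Ollama is actually serving on that port and the model responds:

```shell
curl -s http://localhost:11434/api/tags         # lists pulled models as JSON
ollama run llama3.2:3b "Reply with one word: ready?"
```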

Larger models need more RAM. 7B needs ~8 GB, 13B needs ~16 GB, 34B needs ~32 GB, 70B needs ~64+ GB. For serious local inference, a dedicated server makes more sense than scaling a VPS indefinitely.

Troubleshooting

hermes: command not found. The installer puts binaries in ~/.local/bin. Run source ~/.bashrc or add that directory to your PATH.

Permission denied on Docker. Log out and back in after usermod -aG docker, or run newgrp docker in the current shell.

Gateway disconnects. Set up systemd (step 7). Running hermes gateway in an SSH session dies with SSH.

Out of memory. Hermes plus local Ollama eats RAM. Pick a smaller model or upgrade the server.


Deploy a ServerPoint VPS or dedicated server and get Hermes Agent running in 15 minutes. On Windows? See Install Hermes Agent on Windows via WSL2.