---
title: Hermes Agent
---

Hermes Agent is a self-improving AI agent built by Nous Research. It features automatic skill creation, cross-session memory, and 70+ skills that it ships with by default, and it connects messaging platforms (Telegram, Discord, Slack, WhatsApp, Signal, Email) to models through a unified gateway.

![Hermes Agent](images/hermes.png)

## Quick start

```bash
ollama launch hermes
```

Ollama handles everything automatically:

1. **Install** — If Hermes isn't installed, Ollama prompts to install it via the Nous Research install script
2. **Model** — Pick a model from the selector (local or cloud)
3. **Onboarding** — Ollama configures the Ollama provider, points Hermes at `http://127.0.0.1:11434/v1`, and sets your model as the primary model
4. **Gateway** — Optionally connects a messaging platform (Telegram, Discord, Slack, WhatsApp, Signal, Email) and launches the Hermes chat

<Note>Hermes on Windows requires WSL2. Install it with `wsl --install` and re-run from inside the WSL shell.</Note>

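If the wizard reports connection problems, a quick reachability check against the endpoint from step 3 can help. This sketch assumes only `curl` and Ollama's default address:

```shell
# Probe Ollama's OpenAI-compatible endpoint (the address step 3 configures).
# --max-time keeps the check snappy if nothing is listening.
if curl -fsS --max-time 2 http://127.0.0.1:11434/v1/models >/dev/null 2>&1; then
  status="up"
else
  status="not reachable"
fi
echo "Ollama endpoint is $status"
```

If it prints `not reachable`, start Ollama first and then re-run `ollama launch hermes`.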
## Recommended models

**Cloud models:**

- `kimi-k2.5:cloud` — Multimodal reasoning with subagents
- `glm-5.1:cloud` — Reasoning and code generation
- `qwen3.5:cloud` — Reasoning, coding, and agentic tool use with vision
- `minimax-m2.7:cloud` — Fast, efficient coding and real-world productivity

**Local models:**

- `gemma4` — Reasoning and code generation locally (~16 GB VRAM)
- `qwen3.6` — Reasoning, coding, and visual understanding locally (~24 GB VRAM)

More models at [ollama.com/search](https://ollama.com/search?c=cloud).

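As a rough guide to picking a local model, the VRAM figures above can be turned into a small helper. The thresholds come from the list; `vram_gb` is a placeholder you would set yourself:

```shell
# Suggest a model tag from the lists above based on available VRAM.
vram_gb=16  # placeholder: set this to your GPU's VRAM in GB

if [ "$vram_gb" -ge 24 ]; then
  model="qwen3.6"          # ~24 GB VRAM
elif [ "$vram_gb" -ge 16 ]; then
  model="gemma4"           # ~16 GB VRAM
else
  model="kimi-k2.5:cloud"  # below 16 GB, fall back to a cloud model
fi
echo "Suggested: $model"
```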
## Connect messaging apps

Link Telegram, Discord, Slack, WhatsApp, Signal, or Email to chat with your models from anywhere:

```bash
hermes gateway setup
```

## Reconfigure

Re-run the full setup wizard at any time:

```bash
hermes setup
```

## Manual setup

If you'd rather drive Hermes's own wizard instead of `ollama launch hermes`, install it directly:

```bash
curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash
```

After installation, Hermes launches the setup wizard automatically. Choose **Quick setup**:

```
How would you like to set up Hermes?
...
Connect a messaging platform? (Telegram, Discord, etc.)
Launch hermes chat now? [Y/n]: Y
```