Docker Agent
An open-source runtime for AI agents. Define agents in YAML, give them tools, wire up multi-agent teams — and run them anywhere.
What Is Docker Agent?
Docker Agent is an open-source tool from Docker that lets you build, run, and share AI agents using simple configuration files instead of writing application code.
You describe what your agent does — its model, personality, tools, and teammates — in a YAML file. Docker Agent handles the LLM orchestration loop, tool execution, multi-agent delegation, and streaming output. You focus on what the agent should do, not how to wire it up.
# agent.yaml — this is all you need
agents:
  root:
    model: anthropic/claude-sonnet-4-5
    description: A coding assistant
    instruction: |
      You are an expert developer. Help users write clean,
      efficient code. Explain your reasoning step by step.
    toolsets:
      - type: filesystem
      - type: shell
      - type: think
$ docker agent run agent.yaml
That’s it. Your agent can now read and write files, run shell commands, and reason through problems — all through an interactive terminal UI.
Why Docker Agent?
Most AI agent frameworks ask you to write Python or TypeScript to glue together models, tools, and workflows. Docker Agent takes a different approach: declare everything in config, run it with a single command.
Config, Not Code
Define agents in YAML or HCL. Swap models, add tools, or change behavior without touching application code.
Built-in Tools + MCP
Comes with tools for filesystem, shell, memory, web fetch, and more. Extend with any MCP server — over 1,000 are available.
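Built-in toolsets are enabled by type, as in the example above. Attaching an external MCP server might look like the sketch below — the type: mcp entry and its command/args fields are illustrative assumptions, not confirmed syntax; see the Configuration reference for the actual schema:

```yaml
agents:
  root:
    model: anthropic/claude-sonnet-4-5
    description: An assistant with built-in and MCP tools
    instruction: Use your tools to answer questions.
    toolsets:
      - type: filesystem          # built-in, as shown above
      - type: think               # built-in reasoning scratchpad
      - type: mcp                 # hypothetical: attach an external MCP server
        command: uvx              # hypothetical field: how to launch it
        args: ["mcp-server-fetch"]
```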
Multi-Agent Teams
Build teams of specialized agents that delegate work to each other. A coordinator routes tasks to the right specialist.
Any Model
OpenAI, Anthropic, Google Gemini, AWS Bedrock, local models via Docker Model Runner or Ollama — bring your own provider.
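Because the model is just a field in the config, switching providers is a one-line edit rather than a code change. A minimal sketch, reusing the two model identifiers that appear elsewhere on this page:

```yaml
agents:
  root:
    # Swap providers by changing this one line; the rest of the
    # agent definition stays the same.
    model: openai/gpt-5
    # model: anthropic/claude-sonnet-4-5
    description: A coding assistant
    instruction: Help users write clean, efficient code.
```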
Package & Share Like Images
Push agents to any OCI registry. Pull and run them anywhere with one command — the same workflow you use for containers.
Run Anywhere
Interactive TUI, headless CLI, HTTP API server, OpenAI-compatible chat endpoint, MCP server, or A2A protocol.
How It Works
Docker Agent follows a simple loop:
- You define an agent in YAML — its model, instructions, tools, and sub-agents
- You run it with docker agent run — via TUI, CLI, or API
- The agent processes your request — calling tools, delegating to sub-agents, reasoning step by step
- Results stream back in real time
Zero Config
The fastest way to try it — no config file needed:
# Run the built-in default agent
$ docker agent run
From the Registry
Run pre-built agents from the agent catalog — just like pulling a Docker image:
# A pirate-themed assistant
$ docker agent run agentcatalog/pirate
# A coding agent
$ docker agent run agentcatalog/coder
Multi-Agent Teams
Build a team where a coordinator delegates tasks to specialists:
agents:
  root:
    model: openai/gpt-5
    description: Team coordinator
    instruction: Route tasks to the best specialist.
    sub_agents: [coder, reviewer]

  coder:
    model: anthropic/claude-sonnet-4-5
    description: Writes and modifies code
    instruction: Write clean, tested code.
    toolsets:
      - type: filesystem
      - type: shell

  reviewer:
    model: anthropic/claude-sonnet-4-5
    description: Reviews code for quality
    instruction: Review code for bugs, style, and best practices.
    toolsets:
      - type: filesystem
Non-Interactive Mode
Use --exec for scripting and automation:
# One-shot task
$ docker agent run --exec agent.yaml "Create a Dockerfile for a Node.js app"
# Pipe input
$ cat error.log | docker agent run --exec agent.yaml "What's wrong in this log?"
# Serve as an API
$ docker agent serve api agent.yaml --listen :8080
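Because the API server speaks the OpenAI-compatible chat format, any standard OpenAI client can talk to it. A minimal Python sketch, assuming the server above is listening on :8080 and exposes the usual /v1/chat/completions path (the path and the use of the agent name as the model field are assumptions based on OpenAI compatibility):

```python
import json
import urllib.request

# Standard OpenAI-style chat payload; "root" names the agent defined
# in agent.yaml (assumption: the agent name maps to the model field).
payload = {
    "model": "root",
    "messages": [
        {"role": "user", "content": "Create a Dockerfile for a Node.js app"}
    ],
    "stream": False,
}

req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",  # assumed endpoint path
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)

# Uncomment once the serve command above is running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```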
You can also write agent configs in HCL using labeled blocks and heredocs. See HCL Configuration.
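As a rough illustration of the HCL form — the block and attribute names below simply mirror the YAML keys and are assumptions; only the labeled-block and heredoc features are confirmed by this page, so consult HCL Configuration for the actual schema:

```hcl
agent "root" {
  model       = "anthropic/claude-sonnet-4-5"
  description = "A coding assistant"

  # Heredoc for multi-line instructions
  instruction = <<-EOT
    You are an expert developer. Help users write clean,
    efficient code.
  EOT

  toolset { type = "filesystem" }
  toolset { type = "shell" }
}
```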
Explore the Docs
Introduction
The full story: what Docker Agent is, why it exists, and how it works.
Quick Start
Get your first agent running in under 5 minutes.
Core Concepts
Agents, models, tools, and multi-agent orchestration explained.
Configuration
Full reference for every YAML and HCL option.
Model Providers
OpenAI, Anthropic, Gemini, Bedrock, Docker Model Runner, and more.
Features
TUI, CLI, API server, MCP mode, A2A, RAG, Skills, and distribution.