Last month I handed a task to OpenClaw at 11 PM: research every AI writing tool that launched in the past 30 days, draft comparison notes, and flag anything worth reviewing on computertech.co. I woke up to a full briefing in Telegram. No prompting. No babysitting. No context lost between steps.
That’s not magic. That’s Skills and Sub-Agents — OpenClaw’s two most underused features that most people gloss over in the setup docs and never revisit. If you’ve been running OpenClaw as a fancy chatbot that you have to manually kick into gear every time, this article is the unlock you’ve been missing.
We’ve been running OpenClaw in production for months. Skills and Sub-Agents are responsible for the majority of the autonomous work our setup actually does. Here’s how they work, why they matter, and exactly how to use them.
What Are OpenClaw Skills?
Think of Skills as pre-packaged expertise modules you slot into your OpenClaw agent. Without them, OpenClaw is smart but generic — it can chat, search, run commands. With Skills, it becomes a specialist.
A Skill is a folder containing a SKILL.md file. That file gives OpenClaw specific instructions, context, and procedures for a defined domain. When your message matches a Skill’s description, OpenClaw reads that SKILL.md and follows its instructions instead of improvising from scratch.
It’s the difference between asking a generalist “can you help me with SEO?” and handing a trained SEO strategist a brief. Same request, completely different output quality.
The Skill Discovery System
OpenClaw scans all installed Skills at the start of every session. Each Skill has a description field that defines when it activates. OpenClaw matches your intent against these descriptions and loads the most relevant one — no manual switching required.
The matching logic is smart enough to pick the most specific Skill for a given task. If you have both a general research Skill and a domain-specific seo-audit Skill, asking “audit my site’s SEO” loads the SEO one, not the generic research one.
From the OpenClaw docs:
If exactly one skill clearly applies: read its SKILL.md at the location, then follow it. If multiple could apply: choose the most specific one. If none clearly apply: do not read any SKILL.md.
This is elegant. Your agent gets smarter with every Skill you install, and you never have to manually invoke them. You just work normally.
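To make the quoted decision rule concrete, here's a toy sketch in Python. Real discovery is done by the model reasoning over descriptions, not keyword counting — this just illustrates the one / most-specific / none logic, with "most specific" approximated as highest keyword overlap:

```python
# Toy sketch of the discovery rule: not OpenClaw's actual matcher,
# just the decision logic made concrete with keyword overlap.
def pick_skill(request, skills):
    """Return the name of the Skill to load, or None if nothing applies."""
    words = set(request.lower().split())
    scores = {name: len(words & set(desc.lower().split()))
              for name, desc in skills.items()}
    matching = {name: s for name, s in scores.items() if s > 0}
    if not matching:
        return None                      # no SKILL.md is read
    # "most specific" approximated as strongest overlap with the request
    return max(matching, key=matching.get)

skills = {
    "deep-research": "use for open-ended research on any topic",
    "seo-audit": "use to audit a site's seo: meta tags, headings, links",
}
pick_skill("audit my site's seo", skills)   # → "seo-audit"
```

The same request against only the generic research Skill would still match it; with both installed, the more specific description wins — exactly the behavior described above.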
How to Install Skills
Skills live in your workspace’s skills/ folder. There are three ways to get them:
1. ClawdHub (The Easy Way)
ClawdHub is the official Skill marketplace. The CLI makes it dead simple:
# Search for available skills
clawdhub search seo
# Install a skill
clawdhub install seo-optimizer
# List what you've got installed
clawdhub list
# Update everything at once
clawdhub update --all
Browse the full catalog at clawdhub.com. At last count there were 50+ community Skills covering everything from GitHub automation to Obsidian vault management to Google Search Console queries.
2. Manual Installation
Create a folder in ~/.openclaw/workspace/skills/your-skill-name/ and drop in a SKILL.md. That’s it. OpenClaw picks it up automatically on the next session — no config file edits, no restarts.
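If you prefer to script that, here's a minimal scaffold in Python. The skill name and contents below are illustrative — substitute your own:

```python
# Minimal manual-install scaffold: creates the folder layout described
# above and drops in a SKILL.md. Skill name and contents are examples.
from pathlib import Path

def scaffold_skill(base, name, description, instructions):
    """Create skills/<name>/SKILL.md under `base` and return its path."""
    skill_dir = base / name
    skill_dir.mkdir(parents=True, exist_ok=True)
    skill_md = skill_dir / "SKILL.md"
    skill_md.write_text(
        f"## Description\n{description}\n\n## Instructions\n{instructions}\n"
    )
    return skill_md

scaffold_skill(
    Path.home() / ".openclaw" / "workspace" / "skills",
    "link-checker",                         # example name — pick your own
    "Use when the user asks to check a page for broken links.",
    "1. Fetch the URL\n2. Report any link that does not return HTTP 200",
)
```

Run it once and, per the behavior above, the Skill is live on your next session.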
3. Writing Your Own
This is where it gets powerful. Custom Skills let you encode your exact workflows, your specific tools, your personal shortcuts. We’ll cover this in depth below.
Skills We Actually Use in Production
Here’s a snapshot of the Skills running in our setup and what they actually do. Not a theoretical feature list — real tools we depend on.
deep-research
Fires up a structured research workflow when we need to go deep on a topic. Rather than a single search-and-summarize pass, it decomposes the question into sub-questions, runs parallel searches, cross-references sources, and delivers a synthesis. Replaces about two hours of manual research per use.
We use this before writing any major comparison article — like our OpenClaw vs Auto-GPT vs AgentGPT breakdown — to get the raw intelligence before writing starts.
obsidian-daily
Connects OpenClaw to our Obsidian vault. Appends notes, logs decisions, and pulls from past entries without switching apps. The Skill handles relative dates (“yesterday’s notes”, “last Friday”) and vault navigation automatically.
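The relative-date handling is simpler than it sounds. A rough sketch of the idea, assuming Obsidian's default YYYY-MM-DD daily-note naming (this is our illustration, not the Skill's actual code):

```python
# Sketch of resolving phrases like "yesterday" / "last friday" to the
# date a daily note is named after. Illustrative, not the Skill's code.
from datetime import date, timedelta

WEEKDAYS = ["monday", "tuesday", "wednesday", "thursday",
            "friday", "saturday", "sunday"]

def resolve(phrase, today=None):
    today = today or date.today()
    phrase = phrase.lower().strip()
    if phrase == "yesterday":
        return today - timedelta(days=1)
    if phrase.startswith("last "):
        target = WEEKDAYS.index(phrase.split()[1])
        delta = (today.weekday() - target) % 7 or 7  # always in the past
        return today - timedelta(days=delta)
    raise ValueError(f"unrecognized phrase: {phrase}")

resolve("last friday", today=date(2025, 6, 11)).isoformat()  # → '2025-06-06'
```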
gsc (Google Search Console)
Pulls live search performance data — top queries, CTR by page, impressions trends — directly into our workflow. We use this weekly to find articles losing ground and flag quick-win optimization targets. Way faster than logging into the GSC dashboard and clicking around.
seo-optimizer
Audits HTML files against on-page SEO factors: meta tags, schema markup, heading hierarchy, internal links, page speed signals. Fires automatically when we ask for an SEO review of any page. Outputs a prioritized fix list, not a wall of text.
github
Wraps the gh CLI for issue management, PR reviews, and CI run monitoring. When something breaks in a cron job, the first response is pulling recent commit history and CI logs without leaving the chat interface.
bird (X/Twitter)
Handles content scheduling and posting to our X account. Reads drafts, posts on command, searches recent AI news for content fodder. Combined with a cron job, it runs a lightweight social presence without manual intervention.
Writing a Custom Skill
This is the part that separates people who get real value from OpenClaw from people using it as a slightly smarter ChatGPT. Custom Skills let you encode your actual workflows.
Here’s the exact structure:
~/.openclaw/workspace/skills/
└── my-custom-skill/
└── SKILL.md
The SKILL.md format needs two things: a description for auto-detection and the actual instructions. Here’s a real example from our affiliate link audit workflow:
## Description
Use when the user wants to audit affiliate links on a blog post, check for broken
links, update outdated product references, or refresh monetization on existing content.
## Instructions
### Step 1: Audit Phase
1. Fetch the post URL provided
2. Extract all outbound links
3. Check each link for HTTP 200 status
4. Flag any 404s, redirects to homepages, or links to discontinued products
### Step 2: Research Phase
For each broken or outdated affiliate link:
1. Identify what product/service it was linking to
2. Search for current alternatives with active affiliate programs
3. Check Amazon Associates, ShareASale, or Impact for replacement links
### Step 3: Update Phase
1. Present findings with proposed replacements
2. On confirmation, update the WordPress post via WP-CLI
3. Submit updated URL to Google Indexing API
## Reference Files
- memory/accounts.md (affiliate program logins)
- memory/credentials.md (API keys)
That’s a real pattern we use. A Skill like this turns a 45-minute manual task into a single command.
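For a sense of what the audit phase (Step 1 above) boils down to, here's a stdlib-only sketch. In practice the agent uses its own fetch tooling; the helper names here are ours:

```python
# Sketch of Step 1 (audit phase): extract outbound links from a post's
# HTML and flag the ones that fail. Helper names are illustrative.
from html.parser import HTMLParser
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

class LinkExtractor(HTMLParser):
    """Collect absolute http(s) hrefs from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value and value.startswith("http"):
                    self.links.append(value)

def check_links(html):
    """urlopen raises HTTPError on 4xx/5xx, so those land in 'broken'."""
    parser = LinkExtractor()
    parser.feed(html)
    report = {"ok": [], "broken": []}
    for url in parser.links:
        try:
            urlopen(Request(url, method="HEAD"), timeout=10)
            report["ok"].append(url)
        except (HTTPError, URLError):
            report["broken"].append(url)
    return report

# Extraction needs no network:
p = LinkExtractor()
p.feed('<a href="https://example.com/post">ok</a><a href="#top">skip</a>')
p.links   # → ['https://example.com/post']
```

Note this simple version treats redirects that urlopen follows as "ok"; the Skill's instructions above additionally flag redirects to homepages, which needs a redirect-aware check.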
What Makes a Skill Actually Good
The quality of a Skill comes down to specificity. Generic instructions produce generic results. The Skills that save real time are the ones that:
- Reference specific tools and CLIs (not “check the database” but “run wp db query with this format”)
- Include error handling (“if the API returns 429, wait 30 seconds and retry”)
- Define what output looks like (“return a JSON list of broken links with proposed replacements”)
- Point to reference files for credentials and config — never hardcode anything
The OpenClaw GitHub repo includes a skill-creator skill that scaffolds new Skills. Worth using for complex workflows — it asks the right questions to make your SKILL.md precise.
What Are Sub-Agents?
Sub-Agents are where OpenClaw goes from “helpful assistant” to “autonomous operator.”
A Sub-Agent is an isolated OpenClaw session that your main agent can spawn, direct, and receive results from — all without your involvement. Your main session orchestrates; the Sub-Agents execute in parallel.
The mental model: you’re a project manager. Sub-Agents are your team. You hand out assignments, they execute independently, they report back. You’re not doing the work — you’re directing it.
Why This Changes Everything
Without Sub-Agents, every complex task is sequential. Research, wait. Write, wait. Format, wait. One thread of execution, one task at a time.
With Sub-Agents, five research threads run simultaneously while a drafting agent works from early findings and a verification agent checks outputs — all in parallel, all while your main session stays clean and available for whatever you actually want to be doing.
For content production specifically, this is the difference between one article per session and a full pipeline running autonomously in the background. We covered the mechanics of that pipeline in detail in our article on building an AI content pipeline with OpenClaw.
How Sub-Agents Work Under the Hood
Sub-Agents are spawned via OpenClaw’s internal orchestration layer. You don’t write code to use them — you describe what you want done and OpenClaw handles the spawning, context passing, and result aggregation.
Each Sub-Agent gets an isolated context. It doesn’t have your main session’s conversation history, which keeps it focused and prevents the context drift you get when a single long conversation tries to do too many different things.
Run Mode vs Session Mode
Two modes cover most use cases:
- Run mode — One-shot execution. Give it a task, it runs to completion, returns output. Best for discrete, well-defined work with a clear finish line.
- Session mode — Persistent thread. The Sub-Agent stays alive and receives follow-up instructions. Better for ongoing workflows or tasks that need iteration.
For most automation — research, drafting, data processing — run mode is what you want. Set it, forget it, collect results.
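The contrast is easiest to see with plain Python stand-ins: run mode behaves like a one-shot function call, session mode like a long-lived object you keep sending instructions to. These names are our illustration, not OpenClaw's API:

```python
# Conceptual sketch only — plain Python stand-ins for the two modes,
# not OpenClaw's actual interface.
def run(task):
    """Run mode: one task in, one result out, nothing persists after."""
    return f"result for: {task}"

class Session:
    """Session mode: the Sub-Agent keeps state across follow-ups."""
    def __init__(self):
        self.history = []

    def send(self, instruction):
        self.history.append(instruction)      # context carries over
        return f"step {len(self.history)}: {instruction}"

run("summarize this week's AI launches")      # one-shot, done

s = Session()
s.send("draft an outline")
s.send("now expand section 2")                # same thread, same context
```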
Sub-Agent Patterns That Actually Work
Here’s how Sub-Agents look in production. These are patterns we run regularly.
Pattern 1: Parallel Research
When we need coverage of a fast-moving topic, we spawn multiple research agents simultaneously — one per sub-topic — then have the main agent synthesize. Wall-clock time drops from hours to minutes.
Task: "Spawn 3 research agents in parallel:
- Agent 1: All AI writing tools launched in the past 14 days.
Capture name, pricing, key features, launch URL.
- Agent 2: Top AI discussions on HackerNews and r/MachineLearning
this week. Summarize top 5 threads.
- Agent 3: Pull GSC data for articles dropping in CTR.
List top 10 by impression loss.
Synthesize findings into a weekly intelligence brief."
This runs in the background. Results arrive in Telegram when complete.
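The fan-out/fan-in shape behind this pattern looks like the following sketch, with a thread pool standing in for Sub-Agent spawning and `research()` as a placeholder — in OpenClaw the orchestration layer handles all of this for you:

```python
# Fan-out/fan-in sketch of Pattern 1. research() is a placeholder for a
# real Sub-Agent run; the pool stands in for the orchestration layer.
from concurrent.futures import ThreadPoolExecutor

def research(brief):
    return f"findings: {brief}"       # a real agent would do hours of work here

briefs = [
    "AI writing tools launched in the past 14 days",
    "top AI threads on HackerNews and r/MachineLearning this week",
    "GSC data for articles dropping in CTR",
]

with ThreadPoolExecutor(max_workers=3) as pool:
    findings = list(pool.map(research, briefs))   # all three run in parallel

weekly_brief = "\n".join(findings)                # main agent synthesizes
```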
Pattern 2: Draft + Review Pipeline
One agent drafts, a second reviews against our quality standards and flags issues. The main agent only gets involved to approve or request revisions. Quality goes up; effort per article goes down.
Pattern 3: Automated Site Maintenance
A weekly Sub-Agent checks all published articles for broken links, discontinued affiliate products, and pages returning non-200 status codes. It flags issues and queues fixes. The monitoring side of this is covered in our article on automated website monitoring with OpenClaw.
Pattern 4: Multi-Perspective Analysis
When evaluating a strategic decision, we spawn Sub-Agents with different analytical angles — pessimist, optimist, pure-data analyst. They evaluate independently. The main agent synthesizes. It’s a fast way to stress-test an idea without talking yourself into it.
Honestly? This one sounds gimmicky until you try it. Having a dedicated “what could kill this idea” agent that has no stake in the decision being good is weirdly effective.
Skills + Sub-Agents Together: The Actual Power Move
Sub-Agents inherit Skills. A research Sub-Agent with the deep-research Skill runs a structured research protocol, not a one-pass web search. A writing Sub-Agent with the copywriting Skill follows specific voice and SEO guidelines automatically.
This is how you build repeatable, high-quality autonomous workflows. Not by giving exhaustive instructions every time — by encoding expertise in Skills and delegating execution to Sub-Agents.
Skills are your operating procedures. Sub-Agents are the workers who follow them. Write good SOPs once, spin up any number of workers who execute them correctly — every time, at any hour, without supervision.
That’s the architecture behind what we describe in our full OpenClaw review when we talk about running an autonomous AI operation. Skills and Sub-Agents are the mechanism. The review covers why we chose OpenClaw over alternatives like those in our OpenClaw vs CrewAI comparison.
Setting Up Your First Skill (10 Minutes)
If you haven’t touched Skills yet, here’s the fastest path to value.
Step 1: Install ClawdHub
npm install -g clawdhub
Step 2: Install 3-5 Skills That Match Your Work
clawdhub search research
clawdhub search seo
clawdhub search wordpress
clawdhub install deep-research
clawdhub install seo-optimizer
clawdhub install pinch-to-post
Step 3: Test the Trigger
In your next OpenClaw session, ask something that should match the Skill. Watch how the response changes — you’ll see more structured, specific output compared to a generic query. If it doesn’t trigger, check the Skill’s description field for specificity.
Step 4: Create One Custom Skill
Pick one task you do manually and repeatedly. Write the procedure as a SKILL.md. Install it. Test it. Iterate once.
Start small. A Skill that encodes your preferred article outline structure or your WordPress publishing checklist is enough to immediately see the difference between generic responses and task-specific execution.
If you’re still setting up the fundamentals, our Windows installation guide and the cron jobs guide are worth reading first — Skills and Sub-Agents deliver the most value once your base config is solid.
Who Is This For?
Skills and Sub-Agents are built for OpenClaw users who want more than a smart chat interface. Specifically:
- Solo operators and content creators running autonomous publishing pipelines — if you want your agent researching, drafting, and updating articles while you sleep, this is the unlock.
- Technical founders and developers who need automated code review, PR summarization, or monitoring workflows without babysitting every step.
- Power users who’ve hit the ceiling of single-session AI — if tasks take hours or span multiple domains, Sub-Agents turn that into parallel minutes.
Not sure if OpenClaw is the right platform for you at all? Our roundup of the best AI agent platforms in 2026 compares the main contenders across automation depth, price, and ease of setup — so you can pick the right foundation before going deep on any specific feature set.
Common Mistakes to Avoid
Over-installing Skills
More isn’t always better. If you have 30 Skills installed, the detection logic has more potential conflicts to navigate. Install Skills you actually use. Prune idle ones quarterly.
Vague Skill Descriptions
A description that’s too broad fires the wrong Skill or doesn’t fire at all. Be specific: not “use when the user wants writing help” but “use when the user wants to rewrite the meta description and SEO title for a specific URL using our voice guidelines.”
Spawning Sub-Agents for Quick Tasks
Sub-Agent overhead — spinning up an isolated session, passing context, waiting for return — isn’t worth it for a 30-second task. Use Sub-Agents for work that genuinely benefits from parallelism or isolation. Quick lookups? Answer them in the main session.
Ignoring Model Matching
Running Sub-Agents on your main session’s model (often a premium, high-cost model) for simple tasks burns money unnecessarily. Match model to task. Sub-Agents doing straightforward research or formatting work fine on faster, cheaper models. OpenClaw’s model alias system makes this a one-line config change.
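As a purely hypothetical sketch of that one-line change (the real keys and alias names depend on your OpenClaw config — treat this as shape, not schema):

```yaml
# hypothetical — illustrative key names, not OpenClaw's actual schema
models:
  orchestrator: premium-large   # main session: planning and synthesis
  subagent: fast-small          # bulk research and formatting work
```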
Frequently Asked Questions
Do Skills work on all OpenClaw versions?
Skills are supported on OpenClaw 1.x and later. The SKILL.md format is stable — Skills you write today will work on future versions. Check the OpenClaw docs for breaking changes in major version upgrades.
Can Sub-Agents access my main session’s memory?
Sub-Agents run in isolated contexts and don’t have automatic access to your main session’s conversation history. They can access long-term memories stored in the memory system, workspace files you explicitly pass to them, and any tools and Skills they’re configured with.
How many Sub-Agents can I run simultaneously?
No hard cap in OpenClaw itself, but practical limits come from your LLM provider’s rate limits. In our experience, 3-5 parallel Sub-Agents is the sweet spot before you start hitting throttling on standard API plans. For heavier workloads, stagger spawning or upgrade your API tier.
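Staggered spawning is just bounded concurrency. A sketch of the idea — cap in-flight agents at four even when more tasks are queued — using a semaphore (our illustration of the pattern, not an OpenClaw API):

```python
# Sketch of capping parallel Sub-Agents to stay under provider rate
# limits: 10 queued tasks, at most 4 in flight at once.
import threading
from concurrent.futures import ThreadPoolExecutor

limit = threading.Semaphore(4)        # the 3-5 "sweet spot" from above

def spawn(task):
    with limit:                        # blocks while 4 agents are running
        return f"done: {task}"         # placeholder for a real agent run

tasks = [f"task-{i}" for i in range(10)]
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(spawn, tasks))
```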
Are Skills shared between users?
Skills are local to your OpenClaw instance by default. ClawdHub lets you publish Skills publicly and install community Skills, but nothing shares automatically. Your custom Skills with proprietary workflows stay on your machine.
What is the difference between a Skill and a system prompt?
A system prompt sets global behavior for every interaction. A Skill activates contextually for specific tasks and can be far more detailed than a system prompt allows. Skills can reference external files, run specific commands, and contain structured multi-step workflows. Think of Skills as task-specific operating procedures rather than personality settings.
Can Sub-Agents spawn their own Sub-Agents?
Technically yes — Sub-Agents have access to the same orchestration tools as your main session. In practice, deep nesting gets complex to debug and audit. One level of Sub-Agents under a main orchestrator covers most workflows and is much easier to reason about.
How do I debug a Skill that is not triggering?
Start with the description field — it needs to clearly describe specific scenarios for activation. Verify the SKILL.md is at the correct path (~/.openclaw/workspace/skills/skill-name/SKILL.md). If the Skill triggers but produces wrong output, make the instructions more specific. The OpenClaw GitHub issues tab has community discussion on Skill troubleshooting.
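Those first two checks are scriptable. A quick triage sketch (the word-count threshold is our rough heuristic, not an OpenClaw rule):

```python
# Quick triage for a non-firing Skill: is the file where discovery
# looks, and does it have a usable Description section?
from pathlib import Path

def triage(skill_dir):
    """Return a list of likely problems; empty list means the basics check out."""
    problems = []
    skill_md = skill_dir / "SKILL.md"
    if not skill_md.exists():
        return [f"missing file: {skill_md}"]
    text = skill_md.read_text()
    if "## Description" not in text:
        problems.append("no '## Description' section for discovery to read")
    else:
        desc = text.split("## Description", 1)[1].split("##", 1)[0]
        if len(desc.split()) < 8:   # rough heuristic, not an OpenClaw rule
            problems.append("description may be too short to match reliably")
    return problems

triage(Path.home() / ".openclaw" / "workspace" / "skills" / "seo-optimizer")
```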
Are there Skills for tools like Shopify or HubSpot?
The ClawdHub catalog is growing but coverage varies. For tools with REST APIs, writing a custom Skill is straightforward — the Skill points to the API docs and handles auth setup. Check clawdhub.com for current availability and the GitHub discussions for community-built integrations.



