How to Write Custom OpenClaw Skills: Build Your Own AI Automation in Minutes (2026)


Published March 19, 2026 · Updated March 19, 2026

Most people install OpenClaw, connect it to Telegram, set up a cron job or two, and call it a day. That’s fine. But the users who get the most out of it — the ones running full business operations on autopilot — have figured out the skills system.

Skills are how you teach OpenClaw to do your specific thing. Not generic AI tasks. Not whatever the default setup handles. Your workflows. Your tools. Your automations. A custom skill is essentially a SKILL.md file that tells OpenClaw exactly how to behave when a particular situation comes up — what steps to follow, what tools to use, what to avoid.

We’ve been building custom skills for months. Some are simple (five lines of instructions for a recurring task). Some are complex (multi-step research pipelines with error handling and fallback logic). This guide covers everything we’ve learned about writing them well, including the mistakes we made the first dozen times.

What Is an OpenClaw Skill?

Before getting into the how, it’s worth being precise about what a skill actually is. In OpenClaw’s architecture, a skill is a Markdown file — specifically a SKILL.md — that lives in your skills directory. When the agent detects a relevant situation (based on the skill’s description), it reads the SKILL.md and follows the instructions inside.

Think of it like a job description for a specialist. The general agent handles everyday tasks. But when a very specific type of work comes up, it hands off to the specialist — and the skill file is what tells the specialist what to do.

Skills can do almost anything OpenClaw can do natively:

  • Run shell commands on your server
  • Search the web and synthesize results
  • Post to WordPress, Discord, Telegram
  • Read and write files in your workspace
  • Spawn sub-agents for parallel work
  • Call external APIs
  • Schedule cron jobs
  • Chain multi-step workflows with conditional logic

The difference between a skill and a regular prompt is persistence and precision. A prompt lives in the conversation. A skill lives in the filesystem, fires reliably when triggered, and follows the same steps every time. For repeatable workflows, that’s the difference between something that sometimes works and something you can actually depend on.

The Anatomy of a SKILL.md File

Every skill has a few standard components. Understanding what each one does — and what happens if you get it wrong — saves a lot of debugging time.

The Description Block

This is the most important part of any skill. It’s what the agent reads to decide whether to activate the skill at all. Get this wrong and your skill never fires. Or worse — it fires when you don’t want it to.

The description goes at the top of the file and should be a tight paragraph covering:

  • What the skill does (the task or domain)
  • When to use it (specific triggers and situations)
  • Specific phrases or keywords that should activate it

Here’s a weak description:

# My Skill
This skill helps with research tasks.

And here’s a strong one:

# competitor-research
Use this skill when the user asks to research a competitor, analyze a rival product, 
check what a competing tool is doing, or monitor a competitor's pricing/features. 
Triggers on: "research [company]", "what is [competitor] doing", "compare us to [tool]", 
"check [competitor]'s pricing", "competitor analysis". Do NOT use for general web searches 
that aren't competitor-focused.

The strong version has triggers. It has anti-triggers. It has specificity. That’s what makes it reliable.

The Instruction Body

This is where you write the actual instructions. Think of it as writing a very precise SOP (standard operating procedure) for the agent. The agent will follow these steps literally, so precision matters more than brevity.

Good skill bodies have:

  • Numbered steps (the agent executes them in order)
  • Specific commands and tool calls where applicable
  • Conditional logic (“if X, do Y; if Z, do W”)
  • Explicit success criteria (“the task is done when…”)
  • Error handling (“if the command fails, try…”)
  • Output format instructions (“report results as…”)
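Put together, a body that hits each of these points can be quite short. Here's a skeleton that covers all six; the task, commands, and file names are placeholders for illustration, not a real workflow:

```markdown
## Instructions

1. Ask for the target URL if not provided. Do not proceed without it.
2. Fetch the URL with web_fetch. If the fetch fails, retry once; if it
   fails again, report the error and stop.
3. Compare the fetched content against memory/last-run.md, if that file exists.
4. Output the differences as a bullet list of at most 5 items.
5. Save the list to memory/last-run.md with today's date. The task is done
   when that file is written.
```

Every step starts with an action verb, the one conditional has both branches spelled out, and "done" is defined as a concrete artifact on disk.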

References and Assets

Skills can reference other files — configs, templates, scripts — using relative paths from the skill directory. This is useful for skills that need to use consistent templates, load API credentials from a standard location, or run a specific script every time.

## Config
Load credentials from: ../credentials.md
Use template at: templates/report-template.html

The skill runner resolves these paths relative to the SKILL.md location, so keep your references clean and your file structure organized.
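The resolution rule is ordinary filesystem path math against the SKILL.md's directory. A quick Python sketch makes the behavior concrete (the paths here are hypothetical, and this models the rule rather than OpenClaw's actual resolver):

```python
from pathlib import PurePosixPath
import posixpath

# Hypothetical location of a skill file.
skill_file = PurePosixPath("/home/user/.openclaw/workspace/skills/competitor-research/SKILL.md")
skill_dir = skill_file.parent

def resolve_ref(ref: str) -> str:
    # Join the reference onto the skill's directory, then normalize ".." segments.
    return posixpath.normpath(str(skill_dir / ref))

print(resolve_ref("../credentials.md"))
print(resolve_ref("templates/report-template.html"))
```

So `../credentials.md` lands one level above the skill folder (shared across skills), while `templates/report-template.html` stays inside it.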

Step-by-Step: Building Your First Custom Skill

We’ll build a real skill from scratch — one we actually use: a keyword gap analyzer that checks what topics competitors are ranking for that we’re not covering yet.

Step 1: Identify the Repeatable Workflow

The first question to ask: “Am I doing this more than once?” If the answer is yes, it belongs in a skill. The keyword gap workflow happens every couple of weeks — we check a competitor URL, pull their ranking keywords, compare against our existing content, and output a list of gaps. Exactly the kind of thing that benefits from codification.

Write down the workflow in plain language before touching the skill file:

  1. Accept a competitor URL as input
  2. Fetch the site’s content and note key topics
  3. Search for what keywords that site ranks for
  4. List our existing articles (from WP)
  5. Find gaps — topics they cover, we don’t
  6. Output a prioritized list of article ideas

That’s your skill specification. Now convert it to instructions the agent can follow.

Step 2: Create the Skill Directory

In your OpenClaw workspace, skills live in the skills/ directory (or wherever your config points). Create a folder for your skill:

mkdir -p ~/.openclaw/workspace/skills/keyword-gap-analyzer

Then create the SKILL.md inside it:

touch ~/.openclaw/workspace/skills/keyword-gap-analyzer/SKILL.md

Step 3: Write the Description

Open SKILL.md and start with the description. This is critical — spend more time on this than you think you need to:

# keyword-gap-analyzer

Use this skill when asked to find keyword gaps, identify topics a competitor ranks for 
that we haven't covered, analyze a competitor's content strategy, or find content 
opportunities. Triggers on: "keyword gap", "what topics is [competitor] ranking for", 
"find gaps vs [site]", "what are we missing vs [competitor]", "content gap analysis".

Do NOT use for general SEO questions, keyword research without a competitor, 
or requests to check our own rankings.

Step 4: Write the Instruction Body

Now the meat of the skill:

## Instructions

1. Ask for the competitor URL if not provided. Do not proceed without it.

2. Fetch the competitor's homepage and up to 3 category pages using web_fetch. 
   Note the main topic clusters they cover.

3. Search for "[competitor domain] top ranking keywords" and "[competitor domain] 
   site:ahrefs.com OR site:semrush.com" to surface any publicly available ranking data.

4. SSH to 147.182.147.37 and run:
   wp post list --post_status=publish --format=csv --fields=post_title 
   --path=/var/www/wordpress --allow-root
   to get our current article list.

5. Compare the competitor's topic clusters against our article titles. Identify:
   - Topics they clearly cover that we have no article on
   - Topics where they have multiple articles and we have one or none
   - Any high-traffic keyword patterns (tool reviews, comparisons, guides) 
     that appear in their content but not ours

6. Output a prioritized list of 10 article ideas, formatted as:
   - Title suggestion (with year)
   - Target keyword
   - Why this gap matters (1 sentence)
   - Estimated difficulty: Low / Medium / High

7. Save the output to memory/keyword-gaps.md with today's date and the competitor URL.

Step 5: Add Error Handling

One thing that separates a good skill from a brittle one: it handles failure gracefully instead of silently stopping or hallucinating its way through.

## Error Handling

- If web_fetch fails on the competitor URL: note the error, try fetching /blog or /articles 
  subdirectory instead. If both fail, report that the site may block scraping and ask 
  for an alternative URL.
  
- If SSH command fails: report the error and continue with the competitor analysis only, 
  noting that we couldn't cross-reference our existing content.

- If no ranking data found via web search: note that in the output and base the gap 
  analysis on topic clusters from the fetched pages only.

Step 6: Test It

Save the file, then trigger it in an OpenClaw conversation:

Can you run a keyword gap analysis against backlinko.com?

Watch what happens. Does the agent activate the skill? Does it follow the steps? Where does it deviate? The first run usually surfaces at least one instruction that’s ambiguous or a step that needs more specificity. Iterate on the SKILL.md based on what you observe — not based on what you assumed would work.

Advanced Skill Patterns We Actually Use

Once you’ve got the basics down, a few patterns dramatically increase what skills can do.

The Multi-Phase Skill

Some workflows have distinct phases where the output of phase 1 determines what happens in phase 2. Skills handle this cleanly with explicit phase breaks:

## Phase 1: Research
[steps 1-4]

## Phase 2: Draft (only run if Phase 1 produced at least 3 viable topics)
[steps 5-8]

## Phase 3: Publish (only run if explicitly confirmed by user)
[steps 9-12]

The confirmation gate on the third phase is deliberate: we don't want a research skill auto-publishing without a human checkpoint. Skills can be powerful enough to need explicit guardrails like this.

The Sub-Agent Delegation Pattern

For tasks that are truly parallelizable, skills can delegate to sub-agents. We use this for competitive analysis where we need to check three competitor tools simultaneously:

## Parallel Research
Use sessions_spawn to create 3 sub-agents simultaneously, one per competitor. 
Each sub-agent should:
- Fetch the competitor's pricing page
- Note any recent changes vs last known state in memory/competitors.md
- Return a one-paragraph summary

Wait for all sub-agents to complete before synthesizing the results.

This turns a 15-minute sequential process into a 5-minute parallel one. The OpenClaw skills and sub-agents guide covers the sub-agent system in more depth if you want to go further with parallel execution.
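The sessions_spawn mechanism is OpenClaw's own, but the shape of the workflow is plain fan-out/fan-in. A generic Python sketch of the same pattern (the competitor list and the research function are stand-ins, not OpenClaw code):

```python
from concurrent.futures import ThreadPoolExecutor

competitors = ["tool-a.com", "tool-b.com", "tool-c.com"]  # hypothetical domains

def research(domain: str) -> str:
    # Stand-in for one sub-agent's work: fetch the pricing page,
    # diff it against the last known state, return a short summary.
    return f"summary for {domain}"

# Fan out: one worker per competitor, all running at once.
with ThreadPoolExecutor(max_workers=len(competitors)) as pool:
    summaries = list(pool.map(research, competitors))

# Fan in: synthesize only after every worker has returned.
report = "\n".join(summaries)
```

The "wait for all sub-agents to complete" instruction in the skill corresponds to the fan-in step: synthesis doesn't start until every branch has reported back.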

The Memory-Aware Skill

Skills that read from and write to memory files get dramatically smarter over time. Instead of starting from scratch every run, they accumulate context:

## Before Running
1. Check memory/active-projects.md for any context on current priorities
2. Check memory/lessons.md for relevant lessons from previous runs of this skill
3. Note: if this skill has been run before today, skip steps 2-4 to avoid duplication

## After Running  
1. Write key findings to memory/[relevant-file].md with date
2. Update memory/handoff.md with a one-line summary of what was done

The result is a skill that gets better with use. It knows what it found last time, what didn’t work, what’s already been covered. That’s the compounding advantage of the memory system — and it’s what makes a well-designed skill feel less like a script and more like a trained specialist.

Our guide on building a 24/7 AI employee with OpenClaw goes deeper on the memory architecture side if you want to see how we’ve structured long-term context across skills.

The Cron-Compatible Skill

Skills designed to run on a schedule need a few extra properties. They need to be self-contained (no waiting for user input), they need clear success/failure signals, and they need to communicate results through a channel rather than just returning text to a conversation.

## Cron Execution Notes
This skill is designed to run autonomously on a schedule. It will NOT ask for user 
input at any step. If any step requires information that isn't available, it should:
1. Log the issue to memory/errors.md
2. Send a Telegram notification via the message tool
3. Complete what it can and report partial results

## Output
After completing, send a summary to Telegram with:
- What was checked
- Key findings (bullet list, max 5 items)
- Any errors encountered
- Recommended action (if any)

The OpenClaw cron jobs guide covers the scheduling side in detail — skills and crons work best when designed together from the start.

The ClawdHub Skills Ecosystem

You don’t have to build every skill from scratch. ClawdHub (clawdhub.com) is the community hub for OpenClaw skills — think of it like npm but for AI automation recipes.

Installing a skill from ClawdHub is a single command:

clawdhub install [skill-name]

And checking for updates on all installed skills:

clawdhub update --all

The library includes skills for SEO workflows, social media automation, sales research, competitor monitoring, and more. What we’ve found useful: install skills from ClawdHub as starting points, then customize the SKILL.md for your specific situation. Most community skills are 80% of what you need — the last 20% is always specific to your setup, credentials, and workflows.

Publishing your own skills back to ClawdHub is also worth considering if you’ve built something generalizable. It takes about five minutes to package a skill for submission, and it’s a good way to contribute to a platform you’re getting value from.

Debugging Skills That Aren’t Working

Every skill breaks eventually. Usually the problem is one of three things:

The Skill Isn’t Activating

This means the description isn’t matching the trigger. The agent reads available skill descriptions and picks the most relevant one — if yours doesn’t match closely enough, it won’t fire.

Fix: Add more specific trigger phrases to your description. If you said “keyword research,” add “find gaps,” “what am I missing,” “content opportunities.” Cover the natural language variations of how someone would actually ask for the task.

The Skill Activates but Ignores Steps

This usually means the instruction body is too ambiguous. The agent is improvising instead of following your steps, which means the instructions are either unclear or contradictory.

Fix: Number your steps explicitly. Use action verbs at the start of each one (“Fetch”, “Run”, “Compare”, “Output”). Remove any instructions that could be interpreted multiple ways. If a step has a conditional, make both branches explicit.

The Skill Works Once but Fails Consistently

This is usually an environment issue — a command that works in one context fails in another, or a file path that’s correct on one machine is wrong on another.

Fix: Add error handling for every external call. Use absolute paths where possible. Add explicit checks (“verify the file exists before reading it”). And log failures to a consistent location so you can diagnose what went wrong after the fact.
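When you're diagnosing activation problems across many skills, it helps to check every description in one pass. Here's a small Python sketch that scans a skills directory and flags descriptions missing explicit triggers or anti-triggers; the heuristic is our own convention from this guide, not something OpenClaw enforces, and the demo runs against a throwaway directory rather than a live install:

```python
from pathlib import Path
import tempfile

def lint_skill(skill_md: str) -> list[str]:
    """Return warnings for a SKILL.md description that is likely too vague to fire."""
    warnings = []
    head = skill_md.split("## Instructions")[0]  # look at the description block only
    if "Triggers on" not in head:
        warnings.append("no explicit 'Triggers on:' phrases")
    if "Do NOT use" not in head:
        warnings.append("no anti-triggers ('Do NOT use for ...')")
    return warnings

# Demo: build a temporary skills/ tree with one deliberately vague skill.
root = Path(tempfile.mkdtemp()) / "skills"
(root / "vague-skill").mkdir(parents=True)
(root / "vague-skill" / "SKILL.md").write_text(
    "# vague-skill\nThis skill helps with research tasks.\n"
)

for skill_file in sorted(root.glob("*/SKILL.md")):
    for warning in lint_skill(skill_file.read_text()):
        print(f"{skill_file.parent.name}: {warning}")
```

Point the glob at your real skills directory (e.g., ~/.openclaw/workspace/skills) and any skill that prints a warning is a candidate for the description fixes above.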

Real Skills We Run in Production

To make this concrete, here are actual skills we have running on our setup (simplified descriptions):

content-engine: Checks AI news sources, identifies tool launches worth reviewing, drafts a WordPress article for any tool that’s genuinely new and newsworthy. Runs via cron daily. Has a publish gate so we review before anything goes live. The content pipeline guide documents how this fits into our broader publishing workflow.

site-monitor: SSHs to our DigitalOcean server, checks HTTP status on our top 20 pages, verifies SSL cert expiry, checks disk space, and pings us if anything's wrong. Five-minute cron. Has saved us more than once from discovering a page was 404ing hours after the fact.

mcp-integrations: Handles connecting new MCP servers to our setup — walks through the config steps, tests the connection, documents what was added. The MCP integration guide covers the technical background on what MCP is and why it matters.

deep-research: A beefed-up research skill that spawns a Gemini-powered sub-agent for long-context research tasks that would burn too many Claude tokens. Accepts a topic, returns a structured brief with sources, and saves the brief to memory for later reference.

morning-brief: Fires at 6 AM, checks BTC price, pulls AI news headlines, summarizes the day’s scheduled events, and sends a Telegram message. About 40 lines of instructions. Runs every single day without fail.

None of these are magic. They’re just well-specified workflows encoded as SKILL.md files. The value isn’t in the AI being smart — it’s in you being precise about what you actually want done.

Tips for Writing Skills That Last

Skills that get written once and used for years have a few things in common. Skills that get rewritten every month have problems from the start.

Be specific about success. Every skill should define what “done” looks like. Not “analyze the competitor” — “output a prioritized list of 10 article ideas, saved to memory/keyword-gaps.md.” Vague outcomes produce vague results.

Version your major changes. If you substantially rewrite a skill, keep the old version as a comment or in a separate file. It’s surprisingly common to realize the old approach was actually better for a specific situation.

Document assumptions. If your skill assumes a certain file structure, certain credentials being available, or a certain server being accessible — say so explicitly. Future you (or a collaborator) will thank you when something breaks and there’s a clear record of what the skill expected to find.

Keep the scope tight. Skill creep is real. A skill that starts as “monitor the site” gradually accumulates “and also post to Twitter” and “and also check email” until it’s fragile and unpredictable. When a skill wants to grow, consider whether the new functionality should be a separate skill instead.

Test in isolation before connecting to crons. Run the skill manually and watch it execute before putting it on a schedule. Scheduled skills that fail silently are much harder to debug than interactive ones that fail visibly.

Who Should Be Writing Custom Skills?

Honest take: if you have any repeatable workflow that involves OpenClaw — any task you find yourself describing to the agent more than twice — you should encode it as a skill. The investment is 20-30 minutes of thinking and writing. The payoff is that thing happening reliably, the same way, every time, forever.

You don’t need to be a developer. Skills are Markdown files with plain English instructions. The closest coding equivalent is writing a recipe — sequential steps, conditional branches, clear inputs and outputs. If you can write a recipe, you can write a skill.

The users who get the most from OpenClaw aren’t necessarily the ones with the most technical background. They’re the ones who’ve thought hardest about what they actually want the agent to do and been precise enough to write it down. That’s the whole game.

If you’re just getting started with the platform, the full OpenClaw review covers the overall system, and the setup guide for Mac and Linux gets you from zero to running.

Frequently Asked Questions

What’s the difference between a skill and a system prompt?

A system prompt is always active and sets the general behavior of your agent. A skill is context-specific — it only activates when the agent detects a relevant situation. Skills are better for specialized workflows; system prompts are better for persistent personality and behavior defaults. Most setups use both: a system prompt for general behavior and skills for specific task types.

Can skills call other skills?

Not directly — a skill doesn’t call another skill by name. But a skill can include instructions that describe a situation that would trigger another skill, or it can spawn a sub-agent and give that agent instructions that match another skill’s trigger. In practice, keeping skills modular and non-dependent on each other is cleaner and easier to maintain.

How do I share credentials securely in skills?

Reference credential files by relative path (e.g., ../../memory/credentials.md) rather than hardcoding secrets in the skill body. This way the skill file can be shared or version-controlled without exposing secrets, and you update credentials in one place. Never put API keys or passwords directly in a SKILL.md file.
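A sketch of what the consuming side looks like in practice. The `KEY: value` file format and the `load_credentials` helper here are illustrative assumptions, not an OpenClaw API; the point is that the secret lives in one referenced file, not in the skill body:

```python
from pathlib import Path
import tempfile

def load_credentials(path: Path) -> dict[str, str]:
    """Parse simple 'KEY: value' lines from a credentials file kept outside the skill."""
    creds = {}
    for line in path.read_text().splitlines():
        # Skip comments and anything that isn't a key/value pair.
        if ":" in line and not line.lstrip().startswith("#"):
            key, value = line.split(":", 1)
            creds[key.strip()] = value.strip()
    return creds

# Demo with a throwaway file standing in for memory/credentials.md.
cred_file = Path(tempfile.mkdtemp()) / "credentials.md"
cred_file.write_text("# credentials\nWP_USER: admin\nWP_APP_PASSWORD: xxxx\n")
print(load_credentials(cred_file))
```

Rotating a password then means editing one file, and every skill that references it picks up the change on its next run.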

How many skills is too many?

There’s no hard technical limit, but the agent reads skill descriptions to pick the right one — if you have 50 poorly-described skills with overlapping triggers, the selection process gets messy. We have around 20 custom skills and keep them well-separated by domain. Quality over quantity: 10 precise skills beat 50 vague ones every time.

Can a skill run on a schedule automatically?

Yes — pair it with a cron job. Create the skill, then set up a cron job with a systemEvent payload that describes the trigger situation for that skill. When the cron fires, the agent reads the event, activates the matching skill, and executes it. This is how our morning brief, site monitor, and content engine all work.

Do skills work with all OpenClaw integrations?

Skills can use any tool available to the agent — which means any integration you’ve configured. A skill running in Telegram can send Telegram messages; a skill triggered from Discord can interact with Discord. The available tools depend on your OpenClaw configuration, not the skill itself. Skills are just instructions — the agent’s tool access is what determines what’s actually possible.

What’s the best way to learn skill writing?

Install a few skills from ClawdHub, read their SKILL.md files, and reverse-engineer the patterns you see. Real working skills are the best teachers. Then write your first custom skill for the smallest repeatable task you do, watch it run, and iterate from there. Most people get comfortable with skill writing within a week of their first attempt.


ComputerTech Editorial Team

Our team tests every AI tool hands-on before reviewing it. With 126+ tools evaluated across 8 categories, we focus on real-world performance, honest pricing analysis, and practical recommendations.