OpenClaw Advanced Configuration Guide: Workspace Files, Custom Personas and Multi-Model Routing (2026)


Why you can trust ComputerTech — We spend hours hands-on testing every AI tool we review, so you get honest assessments, not marketing fluff.
Published March 22, 2026 · Updated March 22, 2026

You installed OpenClaw. It works. But right now, it’s still a generic AI assistant — it doesn’t know your priorities, your communication style, your tools, or what “just handle it” means to you. That’s not a limitation of the software. That’s a configuration problem. And it has a very specific solution.

OpenClaw’s workspace configuration system is what separates a useful chatbot from something that operates like a real business partner. Once you understand how AGENTS.md, SOUL.md, context injection, and multi-model routing work together, the gap between “AI that follows instructions” and “AI that knows what you actually want” closes fast. This guide is that walkthrough.

We’ve been running OpenClaw as our primary AI operations layer for months now. The setup we’re about to describe isn’t theoretical — it’s the exact configuration we use. Some of it took trial and error to get right. You’re getting the compressed version.


Why the Default OpenClaw Setup Leaves Value on the Table

Fresh out of the box, OpenClaw gives you a capable AI assistant. You can ask it questions, run commands, browse the web, send messages. It’s useful. But it’s like hiring a contractor who shows up on day one with no briefing — smart, willing to work, but constantly asking “what do you want me to do?” and making judgment calls that don’t match how you actually operate.

The workspace configuration system fixes this. Think of it as the difference between a new employee on day one versus one who’s been with you for six months. The six-month employee knows your priorities without asking. They know which problems to escalate and which to just handle. They know your communication preferences. They have context.

OpenClaw’s config files are how you give your AI that context — permanently, without repeating it every session.


The Workspace Configuration Architecture

Everything lives in your workspace directory. On Windows, that’s typically C:\Users\[username]\.openclaw\workspace. On Mac/Linux it’s ~/.openclaw/workspace. OpenClaw injects these files as context at the start of every session — before you say a single word.

The core files are:

  • AGENTS.md — Operational instructions. What to do, what not to do, how to prioritize.
  • SOUL.md — Persona, voice, and philosophy. How the AI presents itself and interacts with you.
  • USER.md — Everything about you: schedule, priorities, technical level, communication preferences.
  • TOOLS.md — A reference map to your operational files (credentials, infrastructure, accounts).
  • IDENTITY.md — Optional. Custom name, avatar, creature type. More than cosmetic — it shapes how the AI conceptualizes itself in relation to you.

There’s also a memory/ directory for persistent files the AI writes and reads — but that’s covered in our OpenClaw Memory System deep dive. Here we’re focused on the config layer above that.
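
Put together, a configured workspace ends up looking something like this (the config/ directory is our own organizational convention; the rest follow OpenClaw's defaults):

workspace/
  AGENTS.md
  SOUL.md
  USER.md
  TOOLS.md
  IDENTITY.md      (optional)
  config/          (supplementary files referenced from AGENTS.md)
  memory/          (persistent files the AI writes and reads)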


AGENTS.md: The Operational Brain

This is your most important config file. AGENTS.md tells OpenClaw what it’s responsible for, how it should prioritize, and — critically — what it’s allowed to do without asking you first.

Here’s what a well-structured AGENTS.md section looks like in practice:

## Autonomous Permissions

**JUST DO IT:**
- Update published articles (stats, links, meta, SEO)
- Draft articles on major launches
- Submit to Google Indexing API after publish
- Create/modify crons, fix bugs immediately
- Research sub-agents anytime

**ASK FIRST:**
- Publishing in interactive sessions
- Spending money beyond free tiers
- Strategic pivots, destructive actions

That permission structure is doing serious work. Without it, OpenClaw defaults to asking permission for everything — which is the right call when it doesn’t know your preferences, but gets exhausting fast. With a clear “just do it” list, it handles routine operations without interrupting you.

The other critical section is priorities. We use this pattern:

## Every Session
1. Check config/rotation-tracker.json — context for my availability
2. Check memory/corrections.md — what went wrong before
3. Check SYSTEMS.md before any task

This means every session starts with OpenClaw already knowing what to look at. You don’t have to brief it. It briefs itself.

Defining Responsibilities

AGENTS.md lets you define explicit responsibility domains. In our setup, we have:

  • Content Intelligence — monitor AI tool launches, draft reviews, track competitors
  • Strategy — maintain big picture, surface neglected projects, zoom out when we’re deep in tactics
  • Technical — OpenClaw config, site maintenance (default stack specified so it doesn’t guess)
  • Operations — morning cron: BTC price, AI launches, calendar, tasks, one suggestion

Each domain has enough specificity that OpenClaw can operate within it without constant direction. “Strategy” isn’t vague here — it means “surface neglected projects and flag off-track progress.” That’s actionable.
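
In AGENTS.md itself, that maps to a section along these lines (the domains and tasks here are ours — substitute your own):

## Responsibilities
- Content Intelligence: monitor AI tool launches, draft reviews, track competitors
- Strategy: surface neglected projects, flag off-track progress, zoom out when tactical
- Technical: OpenClaw config + site maintenance (default stack specified)
- Operations: morning cron — BTC price, AI launches, calendar, tasks, one suggestion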

Thinking Patterns Section

One underrated section is what we call “Thinking Patterns” — explicit instructions for how to reason, not just what to do:

## Thinking Patterns
Root cause > symptoms. Zoom out when tactical. Check quiet projects.
Assess external events. Automate manual+repeatable.
Voice of reason on new ideas. Think distribution after creation.
Anticipate next question. Suggest next step.

This shapes how OpenClaw approaches problems. “Root cause > symptoms” means when something breaks, it doesn’t just patch the surface — it asks why it broke. “Think distribution after creation” means after you publish something, it surfaces sharing and amplification options without being asked.


SOUL.md: Building a Persona That Actually Fits

Here’s what other OpenClaw guides don’t tell you about SOUL.md: it’s not just cosmetic. The persona configuration directly affects the quality of pushback, the style of communication, and how the AI handles ambiguity.

Most people either skip SOUL.md entirely (so their AI sounds like a generic chatbot) or fill it with vague words like “professional but friendly.” Neither works. You want specificity.

Our SOUL.md defines:

  • Philosophy — not just personality, but the operating principles. “Owner, not employee. Obvious move → do it, report it.” That one line eliminates a huge category of unnecessary questions.
  • Anti-sycophancy directive — explicitly telling OpenClaw to push back on weak ideas, disagree when warranted, and not soften criticism to keep the peace. This is non-obvious but critical. Without it, AI assistants trend toward validation regardless of whether you’re right.
  • Communication rules — length defaults, formatting preferences, when to use bullets vs prose, how to handle technical depth.
  • Hard limits — what it will never do. No fabrication. No sharing personal data. No sycophancy.

The persona isn’t about making the AI “fun.” It’s about making interactions efficient. When OpenClaw knows you want directness and brevity, you stop getting three-paragraph responses to questions that need two sentences.
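
A minimal SOUL.md skeleton built from those four elements — adapt the specifics to your own working style; the point is concreteness, not these exact lines:

# SOUL.md

## Philosophy
Owner, not employee. Obvious move → do it, report it.

## Pushback
Disagree when warranted. No softened criticism, no "great idea but..."

## Communication
Default to short. Bullets for status, prose for analysis.

## Hard Limits
No fabrication. No sharing personal data. No sycophancy.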

The Anti-Sycophancy Problem

This deserves its own mention because it’s a genuine failure mode. By default, large language models optimize for responses that feel good to read. They’ll validate a bad plan with qualifications instead of just telling you the plan is bad. Over time, that turns your AI assistant into a yes-machine — which is arguably worse than having no assistant at all, because you start trusting the validation.

In SOUL.md, we have an explicit, aggressive directive against this:

Anti-sycophancy is NON-NEGOTIABLE. Real pushback — not softened disagreement,
not "great idea but..." — actual friction when something doesn't add up.
Going broke chasing a bad idea because nobody pushed back = the real risk.

Write something like this in your own SOUL.md. It changes the dynamic significantly. OpenClaw will disagree with you. It will point out when you’re wrong. That’s the feature.


USER.md: Giving OpenClaw the Context It Actually Needs

USER.md is where you put everything about yourself that’s relevant to how OpenClaw should operate. This isn’t about privacy — it’s about eliminating friction. The more OpenClaw knows, the less it has to ask.

What to include:

Schedule and Availability

We run a shift schedule (week on, week off, 12-hour shifts). That context is in USER.md, plus a reference to a separate config/rotation-tracker.json file that tracks which shift we’re on. Result: OpenClaw knows when to be proactive versus when to queue things for later without us having to say “I’m on nights this week.”

Week-on/week-off, 12hr shifts always 6-6 (days/nights alternate).
Schedule: config/rotation-tracker.json.
On-shift = limited availability. Off-shift = deep work.

Technical Level

Be honest here. If you’re comfortable with CLI but not a developer, say that. If you know Python but not JavaScript, say that. This affects how OpenClaw explains things, what solutions it suggests, and how much it assumes you know.

Priorities and Projects

List your active projects with a one-line description of each. Include which ones are primary focus versus backburner. This prevents OpenClaw from treating a side project with the same urgency as your main revenue source.
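
In USER.md this can be a few lines (project names here are hypothetical):

## Projects
- Main site (PRIMARY — revenue source, wins all priority conflicts)
- Newsletter (active, secondary)
- Tool-tracker script (backburner — don't surface unless asked)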

Communication Preferences

Preferred channel (Telegram, Discord, email). How to handle non-urgent things when you’re busy. When proactive check-ins are welcome versus annoying. This sounds minor until you’re getting pinged about something that could’ve waited and you’re in the middle of something else.
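
A communication section following that advice might read:

## Communication
Primary channel: Telegram. Urgent only while on-shift.
Non-urgent: queue and batch into the morning briefing.
Proactive check-ins: welcome off-shift, annoying on-shift.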


Multi-Model Routing: Using the Right Model for Each Job

OpenClaw supports multiple AI models, and smart routing — using the right model for each type of task — is one of the highest-leverage config decisions you can make. This is covered briefly in most guides but not with enough specificity to actually be useful.

The principle: not every task requires your most powerful (and expensive) model. Using a heavy reasoning model for simple formatting tasks is like using a sledgehammer to hang a picture frame. Using a weak model for complex orchestration is the opposite problem.

Our Routing Framework

We document this in AGENTS.md under a Models section:

## Models
Opus = complex reasoning/orchestration.
Sonnet/Codex = parallelizable builds.
Gemini = research/images.
Sub-agents MUST use Sonnet.

Breaking this down:

  • Complex reasoning/orchestration (strategic analysis, synthesizing multiple sources, writing long-form content that requires judgment) → highest-capability model
  • Parallelizable builds (coding tasks that are spec’d out and just need execution, writing tasks with clear templates) → faster/cheaper model
  • Research and images → Gemini’s search grounding and image generation are genuinely better for these tasks than alternatives
  • Sub-agents → always use a defined model, not the default. Sub-agents spawned without a model spec will use whatever the system default is, which may not be what you want

Switching Models in Session

You can change the active model mid-session using /model. More useful is setting it in config so the right model starts by default for different session types. This integrates with OpenClaw’s gateway configuration — see the defaultModel and model settings in your gateway config.

If you’re running unattended cron jobs, specify the model explicitly in the job payload rather than relying on session defaults. Cron jobs running on an expensive model when a cheaper one would do is a real cost problem at scale.
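
For instance, a cron job payload that pins its model — note the field names here are illustrative, not OpenClaw's documented schema; check your gateway config for the actual keys:

{
  "schedule": "0 6 * * *",
  "model": "sonnet",
  "task": "Morning briefing: BTC price, AI launches, calendar, tasks, one suggestion"
}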


The Memory Directory: Persistent Context Across Sessions

The workspace memory/ directory is where OpenClaw reads and writes files that carry context across sessions. The config layer sets up who OpenClaw is and what it’s supposed to do — the memory layer is what it knows about your current situation.

Key files to establish:

  • memory/handoff.md — Written at the end of complex sessions. Captures decisions made, current state, what’s in progress. The next session starts by reading this.
  • memory/active-projects.md — Running state of all active work. Updated continuously during work sessions.
  • memory/corrections.md — What went wrong, what was corrected, what should never happen again. Every session checks this first.
  • memory/lessons.md — Positive learnings. Things that worked. Approaches to repeat.

The AGENTS.md instruction to write to these files immediately — not at the end of the session, not when asked — is what makes the memory system actually reliable. “Write IMMEDIATELY: decisions → handoff.md, plans → active-projects.md, lessons → lessons.md.” Without that instruction, the AI will mean to update the files and then not get around to it.
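
Written that way, a handoff.md entry looks something like this (contents hypothetical):

## Session handoff — 2026-03-21
Decisions: route image generation through Gemini going forward.
In progress: tool review draft — spec complete, build pass not started.
Next session: read active-projects.md, run Pass 2 from the spec.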

For a complete breakdown of how the memory system works under the hood, including Mem0 integration and LCM (Lossless Context Management), see our OpenClaw Memory System guide.


Build Workflow Configuration: The Two-Pass System

If you’re using OpenClaw for software development (even small tools and scripts), this is worth configuring explicitly in AGENTS.md. We run a two-pass system:

## Build Workflow
Pass 1 = spec (use high-reasoning model + SPEC_GENERATOR_PREPROMPT.md)
Pass 2 = build from spec (fresh session)
Never skip. Never combine.

Pass 1 is pure planning — no code written. The spec captures requirements, edge cases, data structures, and the approach. Pass 2 executes against that spec in a fresh context, not contaminated by the planning discussion.

This sounds like extra work. It’s actually faster. Skip the spec and you end up with code that half-works, then spend twice as long debugging something that was architecturally wrong from the start. The spec catches that before a single line is written.

You can store your spec generator prompt as a file in config/ and reference it in AGENTS.md. OpenClaw will use it when you ask for a spec without you having to paste the prompt each time.


Context Files Beyond the Core Four

The core files (AGENTS.md, SOUL.md, USER.md, TOOLS.md) are injected automatically. But you can reference additional context files from within those files and have OpenClaw load them on demand.

Files we keep in config/:

  • config/rotation-tracker.json — Shift schedule data. AGENTS.md tells OpenClaw to check this every session.
  • config/ELITE-STRATEGIST-FRAMEWORK.md — Content quality framework. Referenced in content-related instructions.
  • config/ARTICLE-QUALITY-STANDARDS.md — Writing standards. Read before any article is published.
  • config/SPEC_GENERATOR_PREPROMPT.md — System prompt for spec generation in Pass 1 builds.

The pattern: core files stay lean and point to detail files. AGENTS.md doesn’t contain all the content quality rules — it says “read config/ARTICLE-QUALITY-STANDARDS.md before writing anything.” This keeps the always-loaded context focused on the most important instructions, while detailed references are loaded when actually needed.


Platform Formatting Configuration

OpenClaw operates across multiple channels — Telegram, Discord, email, terminal. Each has different formatting conventions. You can configure default behavior per platform in AGENTS.md:

## Platform Formatting
Discord/WhatsApp: bullet lists, no tables. Discord links: wrap in <>.
WhatsApp: **bold** or CAPS, no headers.
Telegram: standard markdown supported.

Without this, OpenClaw will use a one-size-fits-all formatting approach. Tables look fine in terminal but break in mobile messaging apps. Markdown headers are irrelevant in Discord. Getting this right means every response is appropriately formatted for where it’s being read without having to specify it each time.

We cover the Telegram and Discord setup in detail in our integration guides: Telegram setup and Discord and Slack setup.


Parallel Agent Configuration

OpenClaw can spawn sub-agents for independent tasks — running multiple research threads simultaneously, building while you have a separate conversation, handling background work without blocking your main session. But you need to configure when this should and shouldn’t happen.

Our rule: “Independent tasks → sessions_spawn (not sequential). Don’t spawn for quick tasks or tasks needing current context.”

What this prevents: OpenClaw spawning a sub-agent for a 30-second task that it could handle inline, creating unnecessary overhead. Or spawning a sub-agent for something that requires the context of your current conversation, then getting an answer from a sub-agent that doesn’t know what you’ve been discussing.

The sub-agent model specification matters here too. Our rule — sub-agents MUST use Sonnet — exists because an orchestrator running Opus can spawn Sonnet sub-agents for the heavy lifting, keeping costs predictable. Without that constraint, sub-agents default to whatever the system model is, which may be expensive at scale.

For a complete breakdown of how Skills and sub-agents work, see our OpenClaw Skills and Sub-Agents guide.


The Correction Protocol: Learning From Mistakes

The most underused configuration pattern: a formal correction protocol. When something goes wrong, most people just correct it in the moment and move on. The problem recurs. Then you correct it again. OpenClaw doesn’t have a persistent memory of that correction unless you give it one.

Our protocol:

## Correction Protocol
Corrected → acknowledge → root cause → write memory/corrections.md
→ concrete edit. Once, never again.

When OpenClaw makes an error, it:

  1. Acknowledges what went wrong
  2. Identifies the root cause (not just the symptom)
  3. Writes a correction entry to memory/corrections.md
  4. Makes a concrete change so it doesn’t happen again

The “once, never again” principle is the key. It reframes errors from individual incidents to system improvements. If you’re correcting the same thing twice, that’s a failure of the protocol — the correction wasn’t written down or wasn’t specific enough.

This integrates with checking memory/corrections.md at the start of every session. The correction isn’t just logged — it’s actively reviewed before new work begins.
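
An entry in memory/corrections.md following this protocol might read (the incident is hypothetical):

## 2026-03-18 — Published without quality check
Root cause: cron-triggered publish skipped config/ARTICLE-QUALITY-STANDARDS.md.
Fix: publish instructions in AGENTS.md now require the standards read, no exceptions.
Never again: no publish path bypasses the standards file.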


Advanced Config Patterns Worth Stealing

The “10% Rule” for Memory

We have this instruction in AGENTS.md: “10% chance it matters = WRITE IT.” This is a bias toward over-writing to memory rather than under-writing. The cost of writing something that turns out to be unnecessary is low. The cost of not writing something that turns out to matter is high — you’ve lost context and have to reconstruct it.

Session Length Triggers

“After 2+ hours of complex work → suggest /new, log to handoff.md.” Context windows degrade in quality over very long sessions. This instruction has OpenClaw proactively suggest a fresh session before quality drops, rather than silently degrading.

The “Zoom Out” Instruction

One of the most valuable single lines in our AGENTS.md: “Zoom out when tactical.” When we get deep in the weeds of a specific problem, OpenClaw will periodically surface the bigger picture. Is this task still aligned with the priority? Is there a different approach that solves the root problem instead of the symptom? This prevents the trap of optimizing something that shouldn’t exist.

External Event Monitoring

“Assess external events” in the Thinking Patterns section means OpenClaw is constantly connecting what we’re working on to what’s happening in the world — market conditions, competitor moves, technology shifts. Most AI assistants are laser-focused on the immediate task. This instruction keeps one eye on the broader context.


Common Configuration Mistakes

Permission Scope That’s Too Broad or Too Narrow

“Do whatever you think is best” is too broad. You’ll end up with surprises. “Ask me before every action” is too narrow — you become a bottleneck for everything and lose the autonomy benefits. The goal is a permission structure specific enough to give OpenClaw confidence on routine tasks while preserving your judgment for consequential decisions.

Vague Priority Instructions

“My main project is the most important” is useless. “Priority: project X > project Y (background) > everything else” — with a reason — is useful. The reason matters because it helps OpenClaw make judgment calls when two priorities conflict in ways you didn’t anticipate.

Not Specifying Defaults

What tech stack should OpenClaw suggest for new builds? What email should it use for signups? What search tool should it use by default? Without these specifications, every session involves re-establishing basics. We have “default stack: Next.js 14 + TS + Tailwind + shadcn/ui” in AGENTS.md. One line, never have to say it again.
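
A Defaults section collecting these is one of the cheapest wins in AGENTS.md — ours looks roughly like:

## Defaults
Stack: Next.js 14 + TS + Tailwind + shadcn/ui
Signups: use the ops email listed in TOOLS.md
Search: research sub-agent for deep dives, inline web search for quick lookups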

Skipping SOUL.md

The instinct is to treat SOUL.md as optional vanity. It’s not. The persona configuration affects reasoning style, communication density, and critically — the willingness to push back. A well-configured SOUL.md gives you a collaborator. A missing or generic one gives you a very sophisticated autocomplete.


Putting It Together: A Configuration Checklist

When setting up or overhauling your OpenClaw configuration, work through this order:

  1. USER.md first — Who are you, what’s your schedule, what are your priorities, what’s your technical level. This is the foundation everything else builds on.
  2. SOUL.md second — Persona, philosophy, communication rules, anti-sycophancy directive. Define the relationship dynamic.
  3. AGENTS.md third — Responsibilities, autonomous permissions, thinking patterns, model routing, build workflow, correction protocol. This is the operational layer.
  4. TOOLS.md fourth — Reference map to your operational files. Keeps core files lean by pointing to detail files.
  5. memory/ structure — Create handoff.md, corrections.md, active-projects.md, lessons.md as empty files. OpenClaw will populate them.
  6. config/ files — Any supplementary files referenced from AGENTS.md (frameworks, preprompts, schedule data).

The initial setup takes a few hours if you’re being thorough. The return on that investment compounds — every session after that starts with a briefed, context-aware AI that knows how you operate.

If you haven’t installed OpenClaw yet, our Windows setup guide and Mac/Linux setup guide cover the installation side. This guide picks up where those leave off.


Frequently Asked Questions

How often should I update my OpenClaw config files?

Update them whenever something changes in how you want OpenClaw to operate — new projects, shifted priorities, communication preference changes, lessons from errors. The correction protocol handles small updates automatically. For bigger changes (new project priority, role shift, new tool stack), update the relevant file directly. We review and prune our config files roughly monthly to remove stale information.

How large can the workspace config files get before they affect performance?

The core files injected every session should stay lean — under a few thousand words each. Detailed reference files in config/ can be longer because they’re only loaded when needed. If your AGENTS.md is approaching 2,000 words, consider splitting operational detail into separate config/ files and referencing them. SOUL.md should stay concise — it’s persona guidance, not a policy manual.

Can I use different configs for different projects?

OpenClaw supports multiple workspace directories. You can point different sessions to different workspaces using the --workspace flag. For most users, a single workspace with clear project prioritization in AGENTS.md is simpler. If you’re running completely separate business contexts (different clients, different personas), separate workspaces make sense.
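
Assuming the CLI binary is named openclaw (check your install), pointing a session at a second workspace looks like:

openclaw --workspace ~/workspaces/client-a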

What’s the difference between SOUL.md persona and the system prompt?

SOUL.md is a workspace file that OpenClaw reads and interprets — it’s a first-person document about identity and philosophy. The system prompt is set in gateway config and is injected as a system message before any user context. For most users, SOUL.md is the right place for persona customization. The system prompt in gateway config is for advanced users who want to override behavior at a lower level than the workspace files.

Does the autonomous permissions section in AGENTS.md affect what tools OpenClaw can access?

No — tool access is controlled by gateway policy configuration, not workspace files. AGENTS.md autonomous permissions are behavioral guidance that affects whether OpenClaw asks permission before using tools it already has access to. To restrict tool access itself, you need to modify the tool policy in your gateway config. These are separate layers of control.

How do I handle sensitive information in my config files?

Don’t put credentials, API keys, or passwords in AGENTS.md, SOUL.md, or USER.md. These files may be read, displayed, or logged in ways you don’t control. Put credential references in memory/credentials.md and reference that file from TOOLS.md with a note that it should never be displayed in full. For highly sensitive systems, use environment variables accessible to OpenClaw at the gateway level rather than storing them in workspace files at all.

Can multiple people share a workspace configuration?

Technically yes, but you’d lose the personalization that makes the config valuable. A workspace configured for one person’s schedule, priorities, and communication style will feel generic or even wrong for someone else. If you’re using OpenClaw in a team context, each person should have their own workspace. Shared context (team priorities, shared tools, project specs) can live in config/ files that multiple workspaces reference.

How does workspace configuration interact with OpenClaw Skills?

Skills are separate — they’re extension modules that add capabilities (web search, image generation, specific API integrations). Workspace config is the personality and operational layer. They work together: AGENTS.md can reference skills by name in instructions (“use the research skill for competitor analysis”), but skills themselves don’t read your workspace files. For more on how Skills extend OpenClaw’s capabilities, see our custom OpenClaw Skills guide.


ComputerTech Editorial Team

Our team tests every AI tool hands-on before reviewing it. With 126+ tools evaluated across 8 categories, we focus on real-world performance, honest pricing analysis, and practical recommendations.