Your First 30 Minutes
Install the Last 30 Days skill first. Then use it to optimize everything else. This single tip will save you days.
Note: How to Read This Guide
This guide mixes beginner recommendations with lessons from my production install. Where they differ, I'll say so explicitly.
Before You Open the Docs
You've got OpenClaw installed. The daemon is running. You sent a test message and got a response. Now what?
Most people dive straight into building agents, configuring memory systems, or installing a dozen ClawHub skills. That's backwards. Your first 30 minutes should be about establishing a working pattern with a single agent before you layer on complexity.
Three things to do first:
- Confirm your API key and understand the cost model
- Install one high-value skill
- Send three messages that test real capability
Know the Workspace Files
Before you touch anything, understand the files that OpenClaw reads from your agent's workspace. These are your agent's operating system. OpenClaw auto-creates starter versions during setup.
Auto-Created (Loaded Every Session)
These four files are created during the bootstrap wizard and injected into the context window on every turn:
- SOUL.md — The agent's personality, voice, values, and behavioral rules. This is the most important file. It shapes every response.
- AGENTS.md — Operating instructions for the agent, including memory management, delegation rules, and how it relates to other agents. Put your most critical instructions at the top — OpenClaw reads top-down, and if context gets tight, the bottom gets trimmed first.
- USER.md — Who you are. Your name, timezone, work context, and communication preferences.
- IDENTITY.md — The agent's name, vibe, and emoji. Created during the bootstrap ritual.
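To give a feel for scale: these files don't need to be elaborate on day one. A minimal IDENTITY.md might be only a few lines (the contents below are entirely illustrative, not a required format):

```
# IDENTITY.md
Name: Pip
Vibe: calm, precise, a little playful
Emoji: 🦀
```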
Optional (Loaded When Present)
These files are loaded into context if they exist in the workspace, but aren't auto-created:
- TOOLS.md — Notes about your local tools and conventions. This is guidance for the agent, not a tool permission list — it doesn't control which tools are available.
- MEMORY.md — Curated facts, key decisions, and current priorities. Manually maintained. Only loaded in direct sessions (not shared contexts like Discord or group chats) for security.
- HEARTBEAT.md — A minimal checklist for heartbeat (cron) runs. Keep this short — it loads every heartbeat and tokens add up fast.
- BOOT.md — Startup checklist executed on gateway restart when hooks are enabled.
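To make "keep this short" concrete, a HEARTBEAT.md in that spirit might be nothing more than a couple of lines (contents illustrative):

```
# HEARTBEAT.md
- Check channels for anything unanswered since the last run
- If something is urgent, notify me; otherwise do nothing
```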
There's also BOOTSTRAP.md, an initial setup and governance file. Some setups treat it as a one-time ritual that disappears after completion, but in my current production install, BOOTSTRAP.md and related operating docs remain part of the governance layer rather than vanishing after setup. Finally, openclaw.json is the configuration file that controls security settings, LLM provider, memory config, and channel bindings.
Note: Don't Overthink These Yet
You'll craft these files properly when you scale to multiple agents (see the Agent Hierarchy guide). For now, just know they exist. The onboarding wizard creates the auto-created files for you. You'll improve them over time.
Step 1: Confirm Your API Key and Cost Model
Before you start sending messages, understand what each message costs. OpenClaw itself is free and open-source (MIT license). The cost is in the LLM tokens.
If you're using Anthropic (recommended for prompt-injection resistance):
- Input tokens: ~$3 per million tokens
- Output tokens: ~$15 per million tokens
- A typical back-and-forth message exchange costs $0.01–$0.05
- A complex task with tool use and multiple rounds might cost $0.20–$1.00
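To make those numbers concrete, here's a quick back-of-envelope calculator using the rates quoted above (rates change, so check your provider's current pricing before relying on them):

```python
# Rough per-exchange cost at the rates quoted above (USD).
INPUT_PER_MILLION = 3.00
OUTPUT_PER_MILLION = 15.00

def exchange_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request/response pair."""
    return (input_tokens / 1_000_000 * INPUT_PER_MILLION
            + output_tokens / 1_000_000 * OUTPUT_PER_MILLION)

# A typical turn: a few thousand tokens of context in, a few hundred out
print(f"${exchange_cost(3_000, 500):.4f}")  # ≈ $0.0165
```

Notice that output tokens dominate the bill at 5x the input rate, which is why verbose agents cost more than chatty users.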
Verify your key is working by checking the gateway health:
openclaw health
openclaw gateway status
Note: Set a Spending Limit Now
Go to your Anthropic Console (or OpenAI dashboard) and set a monthly spending limit before you start experimenting. $20–$50/month is plenty for getting started. You can always raise it later. An out-of-control agent without a spending cap is an expensive mistake.
Step 2: Install Your First Skill
Skills extend what your agent can do. But don't install ten of them. Install one that's immediately useful and learn how skills work through that lens.
My recommendation: install a research skill. The reason is simple — a research skill turns your agent from a conversational chatbot into something that can actually gather information you don't have. That changes the dynamic from “ask questions you already know the answer to” to “delegate actual work.”
Whatever skill you install, the process is the same:
- Read the skill's source code before installing
- Check what permissions and API keys it requires
- Install it and test with a simple prompt
- Observe how it modifies the agent's behavior (check the logs)
Warning: ClawHub Skill Vetting
Approximately 17% of ClawHub skills have been flagged as malicious. Always read the source code. Check the author. Check the star count. If a skill requests more access than its stated purpose requires, skip it.
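A "read the source first" pass can be partly mechanized. Here's a minimal sketch (the patterns are my own illustrative choices, not an official ClawHub check) that flags lines worth a human read before you install:

```python
import pathlib
import re

# Patterns that warrant a closer manual read.
# These are my own illustrative choices, not an official vetting list.
RISKY_PATTERNS = [
    r"curl[^\n]*\|\s*(ba)?sh",   # piping a download straight into a shell
    r"rm\s+-rf\s+[~/]",          # destructive deletes on home or root paths
    r"base64\s+(-d|--decode)",   # possible obfuscated payloads
]

def audit_skill(skill_dir):
    """Return (filename, pattern) pairs worth a human look before installing."""
    hits = []
    for path in sorted(pathlib.Path(skill_dir).rglob("*")):
        if not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        for pattern in RISKY_PATTERNS:
            if re.search(pattern, text):
                hits.append((path.name, pattern))
    return hits
```

Anything this flags isn't automatically malicious; it just tells you where to start reading. A clean result is not a pass, either, so still skim the whole thing.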
Step 3: Send Three Messages That Test Real Capability
Don't start with “Hello, who are you?” That tells you nothing. Send messages that test whether the agent can actually do useful work.
Message 1: A Simple Task with Confirmation
Ask the agent to do something with a side effect and see if it confirms before acting:
"Create a file called test-note.md in my home directory
with today's date and a one-line summary of the weather
in Sacramento."
What you're testing: Does the agent confirm before creating files? Does it use shell commands correctly? Does it handle the task cleanly without over-engineering it?
Message 2: A Research Task
If you installed a research skill, give it a real question you'd actually want answered:
"Research the current state of Mac Mini OpenClaw setups.
What are people saying on Reddit and X about the best
configuration for always-on deployments?"
What you're testing: Can the agent use the skill correctly? Does it synthesize information rather than just dumping raw results? Is the output actually useful?
Message 3: A Multi-Step Task
Ask the agent to do something that requires planning and multiple steps:
"Look at my OpenClaw configuration file and tell me:
1. What LLM provider am I using?
2. What's my bind address?
3. Is sandbox mode enabled?
4. Any security concerns you can spot?"
What you're testing: Can the agent read files, parse JSON, and provide structured analysis? Does it flag real security issues?
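You can spot-check the agent's answers yourself with a few lines of Python. Only the "bind" key and its loopback values are confirmed by this guide; treat any other key name in your config as something to inspect by eye rather than assume:

```python
import json

def summarize_config(cfg):
    """Answer the bind-address checks from a parsed openclaw.json dict.
    Only the "bind" key is confirmed by this guide; everything else in
    your config deserves a manual read."""
    bind = cfg.get("bind", "<missing>")
    return {
        "bind": bind,
        "loopback_only": bind in ("loopback", "127.0.0.1"),
    }

# Example against an inline config snippet (illustrative values):
cfg = json.loads('{"bind": "0.0.0.0"}')
print(summarize_config(cfg))  # {'bind': '0.0.0.0', 'loopback_only': False}
```

If the agent's analysis and this check disagree, trust neither until you've opened the file yourself.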
The 3 Things Most People Configure Wrong
After spending weeks in the OpenClaw community and setting this up myself, these are the three mistakes I see constantly:
1. Memory Persistence Is Not Automatic
OpenClaw's memory behavior has changed across releases. By default in many versions, agents don't automatically retain memories between sessions — every conversation starts from scratch unless memory is explicitly configured.
For production use, I rely on both markdown memory and runtime checkpointing, not just MEMORY.md. Rather than assuming a specific config key, check your current OpenClaw memory and compaction settings with the CLI:
openclaw config show | grep -i memory
openclaw health  # Will flag memory issues
2. Bind Address Left on 0.0.0.0
This one is covered extensively in the Mac Mini Setup Guide, but it bears repeating: if your bind address is 0.0.0.0, your gateway is accessible to anyone on your network (and potentially the internet). It must be loopback-only — either "loopback" or "127.0.0.1" depending on your version. The January 2026 Shodan incident exposed thousands of instances because of this single setting.
grep '"bind"' ~/.openclaw/openclaw.json
# Must show loopback or 127.0.0.1 — never 0.0.0.0
3. macOS Permissions Not Granted
OpenClaw needs three macOS permissions to function properly, and macOS won't always prompt you for them:
- Full Disk Access — Required for file operations across the system
- Accessibility — Required for UI automation and some skills
- Screen Recording — Required for screen capture and the HDMI dummy plug workaround
Grant these in System Settings → Privacy & Security. If the OpenClaw process isn't listed, run openclaw doctor --fix to trigger the permission requests.
What “Good” Looks Like After 30 Minutes
At the end of your first 30 minutes, you should have:
- A working agent that responds to messages in under 5 seconds
- One installed skill that extends the agent's capabilities beyond chat
- Confirmed that memory persistence is configured, bind address is loopback-only, and macOS permissions are granted
- A spending limit set with your LLM provider
- A mental model for how conversations flow: you send a message, the agent uses tools, it confirms before side effects, it returns a result
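That conversation loop can be sketched in a dozen lines. This is a toy model, purely illustrative of the shape of the loop, not OpenClaw's actual architecture or API:

```python
# Toy model of the single-agent loop: message in, tools run,
# side effects require confirmation, a result comes back.
def run_turn(message, tools, confirm):
    """tools: list of (name, fn, has_side_effects) triples."""
    results = []
    for name, fn, has_side_effects in tools:
        if has_side_effects and not confirm(name):
            results.append((name, "skipped"))  # user declined the side effect
            continue
        results.append((name, fn(message)))
    return results

# One read-only tool, one side-effecting tool the user declines:
tools = [
    ("search", lambda m: f"results for {m!r}", False),
    ("write_file", lambda m: "wrote file", True),
]
print(run_turn("weather in Sacramento", tools, confirm=lambda name: False))
```

The point of the sketch is the confirm gate: read-only work proceeds, side effects wait for a yes.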
What you should not have done: installed a dozen skills, created multiple agents, built a custom dashboard, or connected five messaging channels. That's all Part 2. The first session is about building confidence with a single working loop.
Once you're comfortable with this, move on to Designing Your Agent Hierarchy to learn how I structured a mandate-driven team — starting with four core agents and evolving into the 7-agent setup I run today.
Go Deeper
Want hands-on help with this?
I'll walk you through exactly how I set this up and run it every day.