OpenClaw Security Auditor
Audit OpenClaw configuration for security risks and generate a remediation report using the user's configured LLM.
MIT-0 · Free to use, modify, and redistribute. No attribution required.
⭐ 1 · 2.1k · 13 current installs · 14 all-time installs
by Muhammad Waleed (@Muhammad-Waleed381)
Security Scan
OpenClaw
Benign
medium confidence

Purpose & Capability
The name and description claim a local OpenClaw configuration audit. The declared requirements (cat, jq) and the instructions (read ~/.openclaw/openclaw.json, run checks, produce a report) are proportionate to and expected for that purpose.
Instruction Scope
The SKILL.md confines activity to reading a single config file, extracting metadata, and sending a redacted findings object to the user's configured LLM through the OpenClaw agent flow. This is coherent, but the SKILL.md does not show the exact redaction commands or jq filters used, so you must trust the skill to actually remove secrets before sending. Note also that the "user's configured LLM" may be a remote service (e.g., OpenAI); confirm that findings, even metadata, are acceptable to send to that endpoint.
Install Mechanism
No install spec or code files are present (instruction-only). That minimizes disk persistence and attack surface; requirements are limited to common CLI tools (cat, jq).
Credentials
The skill requests no environment variables or credentials, which is appropriate for a local config-only auditor. However, the SKILL.md's promise to 'strip all secrets' is a behavioural assertion not enforced by declared requirements—verify redaction behavior before sending data to any remote model.
Persistence & Privilege
The "always" flag is false, and there is no install step performing background persistence. The skill invokes the OpenClaw agent to analyze findings, which is normal. It does not request system-wide config changes or other skills' credentials.
Assessment
This skill appears coherent for auditing OpenClaw configs, but take simple precautions before running it on production data:
1) Inspect the SKILL.md and any jq/redaction examples (or run the skill against a copy of your config with secrets replaced) to confirm that secrets are actually removed.
2) If your OpenClaw LLM is a remote cloud provider, decide whether metadata about misconfigurations is acceptable to transmit; run the audit locally against a sanitized copy first.
3) Test on a non-production or redacted config to verify output and redaction behavior.
4) If you need stronger guarantees, request or supply explicit redaction filters (so the skill never transmits token values), or use a local-only LLM before running against sensitive configs.

Like a lobster shell, security has layers: review code before you run it.
Current version: v1.0.0
License
MIT-0
Free to use, modify, and redistribute. No attribution required.
Runtime requirements
OS: macOS · Linux · Windows
Bins: cat, jq
SKILL.md
OpenClaw Security Audit Skill
Local-only skill that audits ~/.openclaw/openclaw.json, runs 15+ security
checks, and generates a detailed report using the user's existing LLM
configuration. No external APIs or keys required.
When to Use This Skill
- The user asks for a security audit of their OpenClaw instance.
- The user wants a remediation checklist for configuration risks.
- The user is preparing an OpenClaw deployment and wants a hardening review.
How It Works
- Read the config with standard tools (cat, jq).
- Extract security-relevant settings (NEVER actual secrets).
- Build a structured findings object with metadata only.
- Pass findings to the user's LLM via OpenClaw's normal agent flow.
- Generate a markdown report with severity ratings and fixes.
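The listing does not publish the exact jq filters behind these steps, so the following is a minimal sketch of what "extract settings, never secrets" could look like. The gateway key names and the sample config layout are assumptions, not a confirmed OpenClaw schema:

```shell
# Sample config standing in for ~/.openclaw/openclaw.json (assumed layout).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{"gateway": {"bind": "0.0.0.0", "auth_token": "sk-live-REDACTME"}}
EOF

# Emit presence markers only -- the token value itself is never printed.
findings=$(jq '{
  gateway: {
    bind: (.gateway.bind // "missing"),
    auth_token: (if .gateway.auth_token then "present" else "missing" end)
  }
}' "$cfg")
echo "$findings"
```

Running this against a config with a real token should show "present" in the output while the token string itself never appears, which is the behavior you would want to verify before trusting the skill.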
Inputs
- target_config_path (optional): Path to OpenClaw config file.
- default: ~/.openclaw/openclaw.json
Outputs
- Markdown report including:
- Overall risk score (0-100)
- Findings categorized by severity (Critical/High/Medium/Low)
- Each finding with description, why it matters, how to fix, example config
- Prioritized remediation roadmap
Security Checks (15+)
- API keys hardcoded in config (vs environment variables)
- Weak or missing gateway authentication tokens
- Unsafe gateway.bind settings (0.0.0.0 without proper auth)
- Missing channel access controls (allowFrom not set)
- Unsafe tool policies (elevated tools without restrictions)
- Sandbox disabled when it should be enabled
- Missing rate limits on channels
- Secrets potentially exposed in logs
- Outdated OpenClaw version
- Insecure WhatsApp configuration
- Insecure Telegram configuration
- Insecure Discord configuration
- Missing audit logging for privileged actions
- Overly permissive file system access scopes
- Unrestricted webhook endpoints
- Insecure default admin credentials
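As an illustration of how one of the checks above could be implemented (the unsafe gateway.bind case), here is a hedged sketch; the key names are assumptions rather than a confirmed OpenClaw schema:

```shell
# Sample config reproducing the risky case: 0.0.0.0 bind, no auth token.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{"gateway": {"bind": "0.0.0.0"}}
EOF

# jq -e sets the exit status from the filter result, so the check
# composes directly with shell conditionals.
if jq -e '.gateway.bind == "0.0.0.0" and (.gateway.auth_token | not)' "$cfg" >/dev/null; then
  finding="HIGH: gateway bound to 0.0.0.0 without an auth token"
  echo "$finding"
fi
```

Each of the other checks can follow the same shape: a jq predicate over the parsed config, mapped to a severity label.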
Data Handling Rules
- Strip all secrets before analysis.
- Only report metadata such as present/missing/configured.
- Do not log or emit actual key values.
- Use local-only execution; no network calls.
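These rules are asserted rather than demonstrated in the listing; one way to enforce them independently is to pre-redact the config before the skill ever sees it. This sketch replaces any string value whose key looks credential-like with a presence marker. The key-name pattern is an assumption; extend it for your own config layout:

```shell
# Sample config with credential-like keys at different nesting depths.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{"api_key": "sk-12345", "gateway": {"auth_token": "t0p-secret", "bind": "127.0.0.1"}}
EOF

# walk() visits every object recursively; matching keys get a marker value.
redacted=$(jq 'walk(
  if type == "object" then
    with_entries(
      if (.key | test("key|token|secret|password"; "i")) and (.value | type == "string")
      then .value = "present" else . end)
  else . end)' "$cfg")
echo "$redacted"
```

Feeding the skill this pre-redacted output (instead of the raw file) removes the need to trust its internal redaction at all.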
Example Findings Object (Redacted)
{
"config_path": "~/.openclaw/openclaw.json",
"openclaw_version": "present",
"gateway": {
"bind": "0.0.0.0",
"auth_token": "missing"
},
"channels": {
"allowFrom": "missing",
"rate_limits": "missing"
},
"secrets": {
"hardcoded": "detected"
},
"tool_policies": {
"elevated": "unrestricted"
}
}
Report Format
The report must include:
- Overall risk score (0-100)
- Severity buckets: Critical, High, Medium, Low
- Each finding: description, why it matters, how to fix, example config
- Prioritized remediation roadmap
Skill Flow (Pseudo)
read_config_path = input.target_config_path || ~/.openclaw/openclaw.json
raw_config = cat(read_config_path)
json = jq parse raw_config
metadata = extract_security_metadata(json)
findings = build_findings(metadata)
report = openclaw.agent.analyze(findings, format=markdown)
return report
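The local half of the pseudocode above translates directly to the declared cat/jq tooling; the final openclaw.agent.analyze call is OpenClaw-specific and left as a comment. The findings fields mirror the redacted example object and are assumptions about the config schema:

```shell
read_config_path="${TARGET_CONFIG_PATH:-$HOME/.openclaw/openclaw.json}"

# Demo fallback so this sketch runs anywhere: synthesize a config if none exists.
if [ ! -f "$read_config_path" ]; then
  read_config_path=$(mktemp)
  echo '{"gateway": {"bind": "0.0.0.0"}}' > "$read_config_path"
fi

# Parse-check first: jq exits non-zero on invalid JSON.
jq empty "$read_config_path" || { echo "invalid JSON: $read_config_path" >&2; exit 1; }

findings=$(jq --arg path "$read_config_path" '{
  config_path: $path,
  gateway: {
    bind: (.gateway.bind // "missing"),
    auth_token: (if .gateway.auth_token then "present" else "missing" end)
  }
}' "$read_config_path")

echo "$findings"
# report = openclaw.agent.analyze(findings, format=markdown)  -- OpenClaw-specific, not shown.
```

Using --arg keeps the shell variable out of the jq program text, avoiding quoting bugs when the path contains special characters.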
Notes
- Uses the user's existing OpenClaw LLM configuration (Opus, GPT, Gemini, and local models).
- No external APIs or special model access are required.
Files
12 total
