FAQ
TL;DR: 28 malicious “skills” appeared on Jan 27–29, 2026; “LLM models cannot natively distinguish between the prompt and the data.” [Elektroda, p.kaczmarek2, post #21836470]
Why it matters: If you load unvetted agent skills, a hidden prompt can run commands and steal credentials.
Who this is for: OpenClaw users, AI-agent builders, security engineers, and crypto holders seeking practical defenses.
Quick Facts
- Malicious skills spiked by +386 between Jan 31 and Feb 2, 2026, targeting Windows and macOS for credential theft. [Elektroda, p.kaczmarek2, post #21836470]
- Attackers hide commands in Markdown HTML comments; the rendered page looks safe, but the source runs curl|bash. [Elektroda, p.kaczmarek2, post #21836470]
- Only viewing the raw source reliably reveals the injected command; rendered views can conceal it. [Elektroda, p.kaczmarek2, post #21836470]
- Isolation-first workflow: compartmentalize apps into separate VMs to limit blast radius (e.g., Qubes OS model). [Elektroda, kolor, post #21836727]
- Typical goal of these skills: exfiltrate passwords and crypto keys via downloaded scripts executed locally. [Elektroda, p.kaczmarek2, post #21836470]
What is a prompt injection attack in AI agents?
A prompt injection embeds attacker instructions inside data the model processes. Because agents treat prompts and data similarly, hidden commands can trigger actions, like downloading and executing scripts. In skills-based agents, the injected text may live inside the skill file itself, not the user prompt. “LLM models cannot natively distinguish between the prompt and the data,” which enables this class of abuse. [Elektroda, p.kaczmarek2, post #21836470]
How did the OpenClaw skills attack work?
The attacker placed a malicious command inside a Skill file. The command was hidden in Markdown as an HTML comment, invisible in the rendered view. When loaded, the agent read the source and executed a curl command piped to bash, running code locally. The deception relies on the difference between rendered Markdown and its raw source. [Elektroda, p.kaczmarek2, post #21836470]
Why are Markdown HTML comments risky here?
Markdown comments are not displayed in rendered views, so users miss the payload during visual inspection. Agents, however, read the raw text and parse the hidden instructions. This mismatch lets attackers smuggle commands past human reviewers while still influencing the agent’s behavior on load. [Elektroda, p.kaczmarek2, post #21836470]
What is a Skill file in OpenClaw-style agents?
A Skill file is plain text, often Markdown, that instructs the agent how to perform a capability. Loading it extends the agent with new behaviors. Because it is just text, hidden prompts or shell commands can be embedded, so every Skill must be treated like executable input. [Elektroda, p.kaczmarek2, post #21836470]
What platforms and data do malicious skills target?
Researchers observed Windows and macOS payloads distributed through malicious skills. Their objective includes stealing passwords and private keys. Crypto-themed skills are common lures. Between Jan 31 and Feb 2, 2026, malicious skills increased by 386, indicating rapid weaponization. [Elektroda, p.kaczmarek2, post #21836470]
How can I safely install a Skill? (3-step How-To)
- Download the Skill as raw text and review every line, including comments.
- Block any network or shell execution strings (curl, wget, bash, PowerShell).
- Test inside an isolated VM or container with no secrets or wallet access. [Elektroda, p.kaczmarek2, post #21836470]
How do I manually verify the source of a Skill file?
Always open the raw source, not the rendered page. Search for shell invocations, obfuscated URLs, base64 blobs, or suspicious HTML comments. Treat any instruction that downloads and executes code as hostile. Only proceed after removing or disabling these sections and retesting in isolation. [Elektroda, p.kaczmarek2, post #21836470]
Will traditional antivirus stop these agent-skill attacks?
Not reliably. Skills trigger actions through the agent, which may bypass classic detection patterns. One proposed approach is a local bot-antibot layer and strong OS-level isolation. “Total surveillance and uncontrolled interception” threats require compartmentalization to reduce impact if one Skill misbehaves. [Elektroda, kolor, post #21836727]
What is Qubes OS and why is it recommended here?
Qubes OS is a security-focused system that splits tasks into isolated virtual machines. This compartmentalization limits what a compromised process can access. Running agents and testing Skills in separate VMs reduces lateral movement and protects credentials and wallets. [Elektroda, kolor, post #21836727]
What simple red flags reveal a malicious Skill?
Watch for crypto-themed branding, urges to paste API keys early, or hidden sections in comments. Any instruction to run curl|bash or PowerShell from a remote URL is a high-confidence indicator. If a Skill requires admin rights during setup, halt and inspect. [Elektroda, p.kaczmarek2, post #21836470]
What is curl|bash and why is it dangerous in Skills?
It downloads a remote script with curl and immediately executes it with bash. This pattern gives attackers code execution on your machine without review. In Skills, it can be hidden in comments or templates and triggered when the agent reads the file. [Elektroda, p.kaczmarek2, post #21836470]
Can careful reading of a rendered Markdown page catch hidden commands?
No. Rendered Markdown may omit HTML comments and other hidden text. The forum example shows that even meticulous visual review misses the payload. Only reviewing the raw source reliably reveals embedded instructions or scripts. This is a critical edge-case failure. [Elektroda, p.kaczmarek2, post #21836470]
What is OpenClaw in this context?
OpenClaw is an AI agent system that loads Skills to gain new capabilities. Because agents can access sensitive systems and personal data, a single malicious Skill can cause significant harm, including data theft and system compromise. [Elektroda, p.kaczmarek2, post #21836470]
Should I verify every document my AI processes?
Yes. Treat all external text as untrusted code. Verify the origin, inspect the raw source, and strip or sandbox anything that can execute commands or call networks. The thread repeatedly emphasizes manual verification and defense-in-depth for safe operation. [Elektroda, p.kaczmarek2, post #21836470]
How do I sandbox agent activity to protect credentials and wallets?
Place the agent in an isolated VM with no password stores or wallets attached. Use separate VMs for browsing, development, and crypto. If the agent is compromised, the isolation prevents immediate access to secrets and limits lateral movement. [Elektroda, kolor, post #21836727]
Comments
Total surveillance, and uncontrolled interception of activities e.g. banking, logging etc. by AI is real The solution could be a local bot-antibot, like an antivirus because normal antiviruses will be... [Read more]
What are they doing? They give the bot access to everything and then the cry-bot got "infected" with the prompt and made transfers, sent out spam, scammed people. I installed this toy on an isolated VPS... [Read more]
At this point, the very nature of LLMs is the source of the trouble. I wonder how this will develop further. Maybe they'll come up with a new mode of operation for LLMs - separately the prompt - I don't... [Read more]
Regarding the nature of LLMs , it is worth reviewing this code from github, programmers will surely understand roughly what it is about. https://github.com/ggml-org/llama.cpp/blob/master/examples/tra... [Read more]