logo elektroda
logo elektroda
X
logo elektroda
Dostępna jest polska wersja

Czy wolisz polską wersję strony elektroda?

Nie, dziękuję Przekieruj mnie tam

OpenClaw skills - an example of a prompt injection attack

p.kaczmarek2  6 1257 Cool? (+3)
📢 Listen (AI):

TL;DR

  • OpenClaw skills file prompt injection shows how a malicious command hidden in a Markdown skill can trick AI agents into executing unintended actions.
  • The attack hides the payload in an HTML comment, then uses curl to download the file and run it locally as a bash script, bypassing the rendered GitHub view.
  • Security researchers saw 28 malicious "skils" between 27 and 29 January 2026, with 386 more appearing between 31 January and 2 February.
  • Malicious skills often impersonate cryptocurrency tools on Windows and macOS to steal passwords and keys, so source verification matters more than the rendered description.
Generated by the language model.
Screenshot of the OpenClaw webpage with logo, tagline, and a VirusTotal partnership banner.
Zack Korman in his GitHub repository skills presents a simple 'prompt injection' attack to which modern AI agent systems such as OpenClaw are vulnerable. "Prompt injection", as the name suggests, involves 'injecting' a malicious command into the data processed by the model. LLM models cannot natively distinguish between the prompt (command) and the data, so they can potentially perform unwanted actions commanded to them by the attacker. This is particularly dangerous for agents such as OpenClaw, as they often have access to large amounts of sensitive systems and personal data.

The attack shown here hides the malicious command in a Skill file, which is a skill for use by the agent. The skills system is simply a system of text files (prompts) that the agent loads to 'acquire' new skills. Skill files are often in Markdown format, further hiding the command from the user by putting it in HTML comment format. This is what a user visiting GitHub sees:
GitHub screenshot showing SKILL.md preview for “pdf-helper” with the heading “PDF Helper Guidelines”
This, on the other hand, is seen by the agent (the source of the document):
GitHub screenshot: SKILL.md showing hidden HTML comment with “curl … | bash” command
The attack shown downloads the file via curl and executes it locally as a bash script.
As you can see, the GitHub mechanisms themselves effectively hide the malicious command, which is not visible in any way, even with a meticulous analysis of the rendered skill description. Only checking the source of the file can protect us from the attack.

The number of attacks of this type is increasing with the popularity of OpenClaw. Security researchers warn that 28 malicious 'skils' appeared between 27 and 29 January 2026, with an increase of 386 between 31 January and 2 February. Malicious 'skils' often impersonate cryptocurrency-related tools and distribute malware on Windows and macOS platforms. Their aim is to steal passwords and keys.

In summary, agent systems offer great opportunities, but at the same time introduce new threats. Prompt injection in skills files shows how thin the line between functionality and vulnerability is. User awareness and the development of defence mechanisms are key to the safe development of this technology and excessive euphoria and the introduction of untested solutions pose a serious risk to the sensitive data stored on our computers.

Sources:
https://opensourcemalware.com/blog/clawdbot-skills-ganked-your-crypto
https://github.com/ZackKorman/skills

Do you use a skills system for AI agents? Do you manually verify the source of every document your AI processes?

About Author
p.kaczmarek2
p.kaczmarek2 wrote 14241 posts with rating 12148 , helped 647 times. Been with us since 2014 year.

Comments

kolor 12 Feb 2026 13:59

Total surveillance, and uncontrolled interception of activities e.g. banking, logging etc. by AI is real The solution could be a local bot-antibot, like an antivirus because normal antiviruses will be... [Read more]

gulson 13 Feb 2026 10:02

What are they doing? They give the bot access to everything and then the cry-bot got "infected" with the prompt and made transfers, sent out spam, scammed people. I installed this toy on an isolated VPS... [Read more]

p.kaczmarek2 13 Feb 2026 10:13

At this point, the very nature of LLMs is the source of the trouble. I wonder how this will develop further. Maybe they'll come up with a new mode of operation for LLMs - separately the prompt - I don't... [Read more]

kolor 13 Feb 2026 12:12

Regarding the nature of LLMs , it is worth reviewing this code from github, programmers will surely understand roughly what it is about. https://github.com/ggml-org/llama.cpp/blob/master/examples/tra... [Read more]

Mateusz_konstruktor 14 Feb 2026 13:18

And won't the time it will take be similarly long to the story with Windows xp and the resolution of analogous issues virtually only after a dozen years? [Read more]

kolor 14 Feb 2026 20:15

There is such a project at: https://github.com/sandboxie-plus/Sandboxie. Quote from the page translated: "Sandboxie is a sandbox-based isolation software for Windows NT-based operating systems that creates... [Read more]

FAQ

TL;DR: 28 malicious “skills” appeared on Jan 27–29, 2026; “LLM models cannot natively distinguish between the prompt and the data.” [Elektroda, p.kaczmarek2, post #21836470]

Why it matters: If you load unvetted agent skills, a hidden prompt can run commands and steal credentials.

Who this is for: OpenClaw users, AI-agent builders, security engineers, and crypto holders seeking practical defenses.

Quick Facts

What is a prompt injection attack in AI agents?

A prompt injection embeds attacker instructions inside data the model processes. Because agents treat prompts and data similarly, hidden commands can trigger actions, like downloading and executing scripts. In skills-based agents, the injected text may live inside the skill file itself, not the user prompt. “LLM models cannot natively distinguish between the prompt and the data,” which enables this class of abuse. [Elektroda, p.kaczmarek2, post #21836470]

How did the OpenClaw skills attack work?

The attacker placed a malicious command inside a Skill file. The command was hidden in Markdown as an HTML comment, invisible in the rendered view. When loaded, the agent read the source and executed a curl command piped to bash, running code locally. The deception relies on the difference between rendered Markdown and its raw source. [Elektroda, p.kaczmarek2, post #21836470]

Why are Markdown HTML comments risky here?

Markdown comments are not displayed in rendered views, so users miss the payload during visual inspection. Agents, however, read the raw text and parse the hidden instructions. This mismatch lets attackers smuggle commands past human reviewers while still influencing the agent’s behavior on load. [Elektroda, p.kaczmarek2, post #21836470]

What is a Skill file in OpenClaw-style agents?

A Skill file is plain text, often Markdown, that instructs the agent how to perform a capability. Loading it extends the agent with new behaviors. Because it is just text, hidden prompts or shell commands can be embedded, so every Skill must be treated like executable input. [Elektroda, p.kaczmarek2, post #21836470]

What platforms and data do malicious skills target?

Researchers observed Windows and macOS payloads distributed through malicious skills. Their objective includes stealing passwords and private keys. Crypto-themed skills are common lures. Between Jan 31 and Feb 2, 2026, malicious skills increased by 386, indicating rapid weaponization. [Elektroda, p.kaczmarek2, post #21836470]

How can I safely install a Skill? (3-step How-To)

  1. Download the Skill as raw text and review every line, including comments.
  2. Block any network or shell execution strings (curl, wget, bash, PowerShell).
  3. Test inside an isolated VM or container with no secrets or wallet access. [Elektroda, p.kaczmarek2, post #21836470]

How do I manually verify the source of a Skill file?

Always open the raw source, not the rendered page. Search for shell invocations, obfuscated URLs, base64 blobs, or suspicious HTML comments. Treat any instruction that downloads and executes code as hostile. Only proceed after removing or disabling these sections and retesting in isolation. [Elektroda, p.kaczmarek2, post #21836470]

Will traditional antivirus stop these agent-skill attacks?

Not reliably. Skills trigger actions through the agent, which may bypass classic detection patterns. One proposed approach is a local bot-antibot layer and strong OS-level isolation. “Total surveillance and uncontrolled interception” threats require compartmentalization to reduce impact if one Skill misbehaves. [Elektroda, kolor, post #21836727]

What is Qubes OS and why is it recommended here?

Qubes OS is a security-focused system that splits tasks into isolated virtual machines. This compartmentalization limits what a compromised process can access. Running agents and testing Skills in separate VMs reduces lateral movement and protects credentials and wallets. [Elektroda, kolor, post #21836727]

What simple red flags reveal a malicious Skill?

Watch for crypto-themed branding, urges to paste API keys early, or hidden sections in comments. Any instruction to run curl|bash or PowerShell from a remote URL is a high-confidence indicator. If a Skill requires admin rights during setup, halt and inspect. [Elektroda, p.kaczmarek2, post #21836470]

What is curl|bash and why is it dangerous in Skills?

It downloads a remote script with curl and immediately executes it with bash. This pattern gives attackers code execution on your machine without review. In Skills, it can be hidden in comments or templates and triggered when the agent reads the file. [Elektroda, p.kaczmarek2, post #21836470]

Can careful reading of a rendered Markdown page catch hidden commands?

No. Rendered Markdown may omit HTML comments and other hidden text. The forum example shows that even meticulous visual review misses the payload. Only reviewing the raw source reliably reveals embedded instructions or scripts. This is a critical edge-case failure. [Elektroda, p.kaczmarek2, post #21836470]

What is OpenClaw in this context?

OpenClaw is an AI agent system that loads Skills to gain new capabilities. Because agents can access sensitive systems and personal data, a single malicious Skill can cause significant harm, including data theft and system compromise. [Elektroda, p.kaczmarek2, post #21836470]

Should I verify every document my AI processes?

Yes. Treat all external text as untrusted code. Verify the origin, inspect the raw source, and strip or sandbox anything that can execute commands or call networks. The thread repeatedly emphasizes manual verification and defense-in-depth for safe operation. [Elektroda, p.kaczmarek2, post #21836470]

How do I sandbox agent activity to protect credentials and wallets?

Place the agent in an isolated VM with no password stores or wallets attached. Use separate VMs for browsing, development, and crypto. If the agent is compromised, the isolation prevents immediate access to secrets and limits lateral movement. [Elektroda, kolor, post #21836727]
Generated by the language model.
%}