Zack Korman in his GitHub repository skills presents a simple 'prompt injection' attack to which modern AI agent systems such as OpenClaw are vulnerable. "Prompt injection", as the name suggests, involves 'injecting' a malicious command into the data processed by the model. LLM models cannot natively distinguish between the prompt (command) and the data, so they can potentially perform unwanted actions commanded to them by the attacker. This is particularly dangerous for agents such as OpenClaw, as they often have access to large amounts of sensitive systems and personal data.
The attack shown here hides the malicious command in a Skill file, which is a skill for use by the agent. The skills system is simply a system of text files (prompts) that the agent loads to 'acquire' new skills. Skill files are often in Markdown format, further hiding the command from the user by putting it in HTML comment format. This is what a user visiting GitHub sees:
This, on the other hand, is seen by the agent (the source of the document):
The attack shown downloads the file via curl and executes it locally as a bash script.
As you can see, the GitHub mechanisms themselves effectively hide the malicious command, which is not visible in any way, even with a meticulous analysis of the rendered skill description. Only checking the source of the file can protect us from the attack.
The number of attacks of this type is increasing with the popularity of OpenClaw. Security researchers warn that 28 malicious 'skils' appeared between 27 and 29 January 2026, with an increase of 386 between 31 January and 2 February. Malicious 'skils' often impersonate cryptocurrency-related tools and distribute malware on Windows and macOS platforms. Their aim is to steal passwords and keys.
In summary, agent systems offer great opportunities, but at the same time introduce new threats. Prompt injection in skills files shows how thin the line between functionality and vulnerability is. User awareness and the development of defence mechanisms are key to the safe development of this technology and excessive euphoria and the introduction of untested solutions pose a serious risk to the sensitive data stored on our computers.
Sources:
https://opensourcemalware.com/blog/clawdbot-skills-ganked-your-crypto
https://github.com/ZackKorman/skills
Do you use a skills system for AI agents? Do you manually verify the source of every document your AI processes?
Cool? Ranking DIY Helpful post? Buy me a coffee.