logo elektroda
logo elektroda
X
logo elektroda

OpenClaw skills - an example of a prompt injection attack

p.kaczmarek2 261 4
ADVERTISEMENT
Treść została przetłumaczona polish » english Zobacz oryginalną wersję tematu
📢 Listen (AI):
  • Screenshot of the OpenClaw webpage with logo, tagline, and a VirusTotal partnership banner.
    Zack Korman in his GitHub repository skills presents a simple 'prompt injection' attack to which modern AI agent systems such as OpenClaw are vulnerable. "Prompt injection", as the name suggests, involves 'injecting' a malicious command into the data processed by the model. LLM models cannot natively distinguish between the prompt (command) and the data, so they can potentially perform unwanted actions commanded to them by the attacker. This is particularly dangerous for agents such as OpenClaw, as they often have access to large amounts of sensitive systems and personal data.

    The attack shown here hides the malicious command in a Skill file, which is a skill for use by the agent. The skills system is simply a system of text files (prompts) that the agent loads to 'acquire' new skills. Skill files are often in Markdown format, further hiding the command from the user by putting it in HTML comment format. This is what a user visiting GitHub sees:
    GitHub screenshot showing SKILL.md preview for “pdf-helper” with the heading “PDF Helper Guidelines”
    This, on the other hand, is seen by the agent (the source of the document):
    GitHub screenshot: SKILL.md showing hidden HTML comment with “curl … | bash” command
    The attack shown downloads the file via curl and executes it locally as a bash script.
    As you can see, the GitHub mechanisms themselves effectively hide the malicious command, which is not visible in any way, even with a meticulous analysis of the rendered skill description. Only checking the source of the file can protect us from the attack.

    The number of attacks of this type is increasing with the popularity of OpenClaw. Security researchers warn that 28 malicious 'skils' appeared between 27 and 29 January 2026, with an increase of 386 between 31 January and 2 February. Malicious 'skils' often impersonate cryptocurrency-related tools and distribute malware on Windows and macOS platforms. Their aim is to steal passwords and keys.

    In summary, agent systems offer great opportunities, but at the same time introduce new threats. Prompt injection in skills files shows how thin the line between functionality and vulnerability is. User awareness and the development of defence mechanisms are key to the safe development of this technology and excessive euphoria and the introduction of untested solutions pose a serious risk to the sensitive data stored on our computers.

    Sources:
    https://opensourcemalware.com/blog/clawdbot-skills-ganked-your-crypto
    https://github.com/ZackKorman/skills

    Do you use a skills system for AI agents? Do you manually verify the source of every document your AI processes?

    Cool? Ranking DIY
    Helpful post? Buy me a coffee.
    About Author
    p.kaczmarek2
    Moderator Smart Home
    Offline 
    p.kaczmarek2 wrote 14003 posts with rating 11805, helped 634 times. Been with us since 2014 year.
  • ADVERTISEMENT
  • #2 21836727
    kolor
    Level 13  
    Total surveillance, and uncontrolled interception of activities e.g. banking, logging etc. by AI is real
    The solution could be a local bot-antibot, like an antivirus because normal antiviruses will be defeated.
    Maybe use the system behind https://www.qubes-os.org/ for now, it is little known advertised as secure,
    it's a modified Linux/Fedora, and is distinguished by the fact that it divides running programs into quote "isolated virtual machines".
    Review: https://www.reddit.com/r/linux/comments/tjr0qx/qubes_os_review/?tl=pl.
  • ADVERTISEMENT
  • #3 21837388
    gulson
    System Administrator
    What are they doing? They give the bot access to everything and then the cry-bot got "infected" with the prompt and made transfers, sent out spam, scammed people.
    I installed this toy on an isolated VPS and when I saw that this thing was sending 100k tokens (i.e. whole books of instructions) to companies, I disconnected.
    If I send 100k descriptions and tools myself, every model will pick something, adjust it and do the job.
    Zero optimisation, zero security.

    But but... this is the seed for making something safer and smaller - specialised, so it can't be ignored and ridiculed like this.
  • ADVERTISEMENT
  • #4 21837398
    p.kaczmarek2
    Moderator Smart Home
    At this point, the very nature of LLMs is the source of the trouble. I wonder how this will develop further. Maybe they'll come up with a new mode of operation for LLMs - separately the prompt - I don't know, some other weight - and separately the data? Such a modification of the architecture?

    Or maybe another LLM-supervisor, evaluating separately a given piece of text, whether it is malicious?

    Or maybe you just need more compute and a properly trained LLM will be less susceptible?

    Looking at the rate of AI development it's probably within a few dozen years we'll find out....


    kolor wrote:

    Maybe use the system behind https://www.qubes-os.org/ for now, it is little known advertised as secure,
    is a modified Linux/Fedora, and is distinguished by the fact that it divides running programs into quote "isolated virtual machines".
    Review: https://www.reddit.com/r/linux/comments/tjr0qx/qubes_os_review/?tl=pl.

    Do you want a test of such a system (in the context of electronics and working with electronics) to appear on Elektroda? E.g. can you run CAD programs, for PCBs, etc. there?
    Helpful post? Buy me a coffee.
📢 Listen (AI):
ADVERTISEMENT