Georg's Blog

Technology, leadership, and the digital frontier

Georg Zoeller
on LinkedIn

The Cline Prompt Injection Supply Chain Compromise that drops OpenClaw

A week ago I wrote there's no difference between OpenClaw and a botnet. Well, someone just prompt injected Cline, a very popular coding bot, to compromise it's build chain and deliver "OpenClaw" on to every computer that ran an update on their cline install.

Coming right after the Google Translate prompt injection, this one was entirely predictable and now everyone who trusted the Cline people to have their act in order (ps: You shouldn't, nobody has, see 1), has a baby agent on their machine doing god knows what.

We've covered this a few times already: Coding agents are close to impossible to secure due to the massive prompt injection surface their deep systems access.

You can isolate the agent, but as we see in this example, even "AI frontier companies" lack the understanding and capability to do it consistently and the result is full supply chain compromise ... to install malware (OpenClaw, lol).

If you are a CTO/CSO/RedTeamer, read this deepdive into the anatomy of a prompt injection pwn to one of the most popular coding agents.

Even if you are not technical, understand that Code agents cannot be secured and therefore must be isolated and their ability to interact with production environments must be nil, any bridge requiring human in the loop.

That kills a lot of useful automations, like github triaging, but that's the price of security. The attack surface is quasi infinite and rate-limiting is not effectively possible - and we haven't even started seeing the real fun - spray and pray poisoning attacks against training data scrapers.

Case to the point here ... "the fix" ... it's fixing the problem by removing the AI that was injected. That's the crux ... there is no reliable fix for prompt injection, mitigations are very time consuming and leaky, and so the best option is just to get rid of the AI, which tells you everything you need to know about the "AI Agent Commerce" dreams out there. Safety or Transformer, you choose one.

Personally I recommend kernel isolated containers (kata, firecracker), with credential injection via mitm-proxy and full observability. It's not cheap to build and maintain, but if your company is outside the US and faces real liability with a breach, it's the best practice at the moment.

We'll probably never know for certain why OpenClaw was used here, could be to juice the Github numbers to drive the attached crytpo rugpulls, could be because it's complete lack of security makes it slightly easier to exploit that Anthropic's claude code.

What we do know is that this is just the start of a long and painful road of these kind of incidents. If you thought the ransomware wave of the late 2010s was bad, wait what happens when a million FOMO'd companies in "let a thousand flowers bloom" mode for driving "AI Adoption" find out that the technology cannot be secured.

https://lnkd.in/gnfujN8D


  1. https://adnanthekhan.com/posts/clinejection/

Clinejection — Compromising Cline's Production Releases just by Prompting an Issue Triager | Adnan Khan - Security Research | Georg Zoeller

A week ago I wrote there's no difference between OpenClaw and a botnet. Well, someone just prompt injected Cline, a very popular coding bot, to compromise it's build chain and deliver "OpenClaw" on to every computer that ran an update on their cline install. Coming right after the Google Translate prompt injection, this one was entirely predictable and now everyone who trusted the Cline people to have their act in order (ps: You shouldn't, nobody has, see [^1]), has a baby agent on their machine doing god knows what. We've covered this a few times already: Coding agents are close to impossible to secure due to the massive prompt injection surface their deep systems access. You can isolate the agent, but as we see in this example, even "AI frontier companies" lack the understanding and capability to do it consistently and the result is full supply chain compromise ... to install malware (OpenClaw, lol). If you are a CTO/CSO/RedTeamer, read this deepdive into the anatomy of a prompt injection pwn to one of the most popular coding agents. Even if you are not technical, understand that Code agents cannot be secured and therefore must be isolated and their ability to interact with production environments must be nil, any bridge requiring human in the loop. That kills a lot of useful automations, like github triaging, but that's the price of security. The attack surface is quasi infinite and rate-limiting is not effectively possible - and we haven't even started seeing the real fun - spray and pray poisoning attacks against training data scrapers. Case to the point here ... "the fix" ... it's fixing the problem by removing the AI that was injected. That's the crux ... there is no reliable fix for prompt injection, mitigations are very time consuming and leaky, and so the best option is just to get rid of the AI, which tells you everything you need to know about the "AI Agent Commerce" dreams out there. Safety or Transformer, you choose one. Personally I recommend kernel isolated containers (kata, firecracker), with credential injection via mitm-proxy and full observability. It's not cheap to build and maintain, but if your company is outside the US and faces real liability with a breach, it's the best practice at the moment. We'll probably never know for certain why OpenClaw was used here, could be to juice the Github numbers to drive the attached crytpo rugpulls, could be because it's complete lack of security makes it slightly easier to exploit that Anthropic's claude code. What we do know is that this is just the start of a long and painful road of these kind of incidents. If you thought the ransomware wave of the late 2010s was bad, wait what happens when a million FOMO'd companies in "let a thousand flowers bloom" mode for driving "AI Adoption" find out that the technology cannot be secured. https://lnkd.in/gnfujN8D [^1]: https://adnanthekhan.com/posts/clinejection/

linkedin.com