“Disregard that!” attacks
A worthwhile to read essay diving deeper into prompt injection, using easy to understand examples exposing the seer breadth of the attack surface and hopefully convincing the reader that there's no "fix" for this problem.
If your LLM takes in JSON responses from untrusted APIs you are at risk. If your LLM searches Google to find background information from untrusted sources, you are at risk. If your LLM scans the office network file share (which anyone can put stuff into!) you are at risk.The problem isn't actually untrusted users, the problem is untrusted material - of any kind.