Prompt injection is a vulnerability present in LLM systems and in software products that take advantage of LLM features. It’s something that every developer should be aware of and consider when implementing LLM-based systems.
Links:
- InfoQ article on Prompt Injection https://www.infoq.com/articles/large-language-models-prompt-injection-stealing/
- OWASP page on Prompt Injection https://genai.owasp.org/llmrisk/llm01-prompt-injection/
- Datacamp article on Prompt Injection https://www.datacamp.com/blog/prompt-injection-attack
- 4 types of prompt injection attacks and how they work https://www.techtarget.com/searchSecurity/tip/Types-of-prompt-injection-attacks-and-how-they-work
- Inject my PDF, the description of a tool that modifies your resume to exploit automated screening systems https://kai-greshake.de/posts/inject-my-pdf/
- IBM page on Prompt Injection https://www.ibm.com/think/topics/prompt-injection
