Robust Prompt Optimization for Defending Language Models Against Jailbreaking Attacks. arXiv:2401.17263, Jan 30, 2024.
Jatmo: Prompt Injection Defense by Task-Specific Finetuning. arXiv:2312.17673, Dec 29, 2023.
Prompt Injection Attacks and Defenses in LLM-Integrated Applications. arXiv:2310.12815, Oct 19, 2023.
Prompt Injection attack against LLM-integrated Applications. arXiv:2306.05499, Jun 8, 2023.
Not what you've signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection. arXiv:2302.12173, Feb 23, 2023.