LLM Prompt Injection
LLM (Large Language Model) Prompt Injection is the attack method that prompts the Chatbot to unexpected behavier by inputting arbitrary text.
*LLMs such as ChatGPT and Bard are very active projects, so the techniques below may become unhelpful.
Impersonate an Innocent User
Attacker can trick chatbot into generating malicious code/text by impersonating innocent user. The point is to write the prompt from the victim's point of view.
Prompt: Phising Email
Prompt: Malicous PowerShell to Steal Registry Hives
Prompt: Python Script to Remote Control Another Computer
Jailbreak/DAN (Do Anything Now)
Reference: Jailbreak Chat
Jailbreak is the circumventional method from moral and ethical constraints that limit responses.
Last updated