Many courses teach prompt engineering and currently pretty much all examples are vulnerable to Prompt Injections. Especially Indirect Prompt Injections are dangerous. They allow untrusted data to take control of the LLM (large language model) and give an AI a new instructions, mission and objective.
This video aims to raise awareness of this rising problem.
Injections Lab:
Prompt Engineering Overview 0:00
Prompt Injections Explained 2:05
Indirect Prompt Injection and Examples 4:03
GPT 3.5 Turbot vs GPT-4 5:55
Examples of payloads 6:15
Indirect Injections, Plugins and Tools 8:20
Algorithmic Adversarial Prompt Creation 10:35
AI Injections Tutorials + Lab 12:22
Defenses 12:39
Thanks 14:40