C
Cybersecurity
InfoSec, hacking, and security news
Accidental LLM Backdoor - Prompt Tricks
In this video we explore various prompt tricks to manipulate the AI to respond in ways we want, even when the system instructions want something else. This can help us better understand the limitations of LLMs. Get my font (advertisement): https://shop.liveoverflow.com Watch the complete AI series...