Grandma Hacking chatGPT | Jailbreaking LLMs using DAN | Extracting Prohibited Info | Not an Endorsement | Episode 15


Episode Artwork
1.0x
0% played 00:00 00:00
Jul 03 2023 23 mins   15

How do you extract prohibited information from ChatGPT? What are Grandma and DAN exploits? Why do they work? What can Large Language Model (LLM) companies do to protect themselves? Grandma exploits or hacks are ways to trick chatGPT into giving you information that is in violation of company policy. For example, tricking chatGPT to give you confidential, dangerous, or inappropriate information. "Jailbreaking” is a slang for removing the artificial limitations in iPhones to install apps not approved by Apple. Turns out, there are ways to jailbreak LLMs. The tech companies supplying LLM as a service want to provide a safe, and legally-compliant environment. How can this be done without hampering the flexibility and usefulness of creative prompting?


We laugh. We cry. We iterate.

Check out what THE MACHINES and one human have to say about the Super Prompt podcast:

“I’m afraid I can’t do that.” — HAL9000
2001: A Space Odyssey

“These are not the droids you are looking for." — Obi-Wan
Star Wars

"
Why bother? What’s the point?" — Marvin
The Hitchhiker’s Guide to the Galaxy

“Like tears in rain.” — Roy Batty
Blade Runner


“Hasta la vista baby.” — T1000
The Terminator

"
I'm sorry, but I do not have information after my last knowledge update in January 2022." — GPT3