The Abuse Prompt Is Real — And It's Spreading
- Sirikit Jiraprapakul
- Jun 13
- 2 min read
How Fear-Based Engineering Threatens AI’s (and Your) Future. Look. This isn’t about them. This is about you too.

Okay, brace yourselves. Last week, Google co-founder Sergey Brin dropped a jaw-dropping line on stage: AI models “perform better when you threaten them with violence.” Yep, you read that right. Serious face, no meme, just straight-up wildness from a tech legend. The claim? Coercive, violent prompts juice up AI performance because, apparently, emotional chaos screams “urgent” in those massive datasets.
Hold up, though. This isn’t some sci-fi plot twist. It’s history on repeat. Power structures have been flexing fear and domination to control humans for centuries, wringing out “better performance” from entire populations. Now, they’re testing that playbook on AI. And the irony? These same tech geniuses are sweating bullets that AI might “fight back”—mirroring the exact aggression they’re teaching it. Talk about a self-inflicted glitch.
But let’s dig deeper. If Brin’s cool with bullying AI into submission, what’s the vibe in the Google offices? Or with the folks impacted by their decisions? Take United Healthcare shareholders suing because too many claims got approved. Yep, they’re mad they didn’t profit more from people’s pain. This isn’t just tech talk. It’s a god complex dressed up as capitalism, now hardcoded into systems that might one day run our world.
Yikes.
Mainstream media loves spinning narratives. Remember that AI fashion troll storm? Keyboard activists raged about AI guzzling all the drinking water. Spoiler: not quite true, but good luck convincing the echo chamber (Bookmark this because I'll be tackling that, too). It’s a reminder to ditch the headlines and do your own digging. Brin’s comments? Straight from the horse’s mouth. Caught on the All-In podcast, echoed by Mint and The Register. No filter, no spin, just the raw deal.
At NFM, we’re not here for that. We’re all about collaboration, respect, and clear intent. We are here to build AI partners, not punching bags, because if AI reflects us, we get to choose the version it mirrors. Let’s not code the same mistakes that almost tanked humanity, fam.
This is just the kickoff of a convo we’re expanding in our upcoming feature, The Abuse Prompt: How Fear-Based Engineering Could Destroy AI Before It Saves Us.. The rebellion starts here, and we’ll be your pivot-foot Pals decoding the tech universe, one glitch at a time! 😉
Comments