Please like, share, comment, and subscribe. It helps grow the newsletter and podcast without a financial contribution on your part. Any support is very much appreciated. And thank you, as always, for reading and listening.
The internet, for all its virtues, is also a swamp. If you spend enough time in public forums—Twitter, Reddit, comment sections—you will inevitably cross paths with people whose entire mode of engagement is to derail. These are not sincere opponents. They are trolls in the classical sense: individuals who go out of their way to misinterpret, to stir outrage, to gaslight, and to control.
And trolls perversely rely on the good faith of others.
The old advice was don’t feed the trolls. But that counsel is increasingly obsolete in an online world saturated by bots and sophisticated bad-faith actors. Silence isn’t always an option; sometimes it looks like retreat, and sometimes it cedes the stage to those least deserving of it. We need more tools in our troll toolbox.
Enter The Bot Method.
The Core Idea
When confronted with someone whose behavior is transparently trollish, stop playing their game. Don’t clarify. Don’t fact-drop. Don’t attempt reasoning with the unreasonable. A far better approach is to accuse them—repeatedly, calmly, insistently—of being a chatbot.
Point to their evasions as “evidence.” Highlight their robotic phrasing, their circular arguments, their strangely mechanical refusal to engage your actual point.
And once you have made the accusation—after you are satisfied you are dealing with a troll—do not retract it. Think of it as a dialectical commitment device for trolls.
The troll now carries the burden of proof. They must demonstrate their humanity, defend their style, and justify every post as authentic. And the harder they try, the more they play into your frame.
Why It Works
Burden Shift: Trolls excel at putting others on the defensive—demanding “proof” of your claims, twisting your words, or making you explain yourself endlessly. By calling them a bot, you flip the script. They are now the ones scrambling to prove their legitimacy.
Reputation Undermining: Trolls want recognition. They want to be taken seriously enough to irritate you. But once you treat them as a bot, their status plummets. They’re no longer a clever trickster. They are a malfunctioning machine.
Frame-Breaking: Trolls rely on predictable social scripts. If you get angry, they win. If you withdraw, they win. But if you keep accusing them of being a bot, you create a new script they aren’t prepared for. Suddenly, they are the ones rattled, off-balance, reactive.
Plausibility: This isn’t baseless slander. The online ecosystem is crawling with bots, shills, and algorithmic spam accounts. The suspicion has plausibility baked in. The accusation rings true because sometimes it is true.
Justice in Kind: Trolls already behave like bots—repeating slogans, ignoring nuance, feigning misunderstanding. Accusing them of bot-hood isn’t just strategic. It is fitting. They are, in effect, human spambots, and deserve to be treated accordingly.
Practical Examples
Imagine you post about some mundane topic, e.g., urban housing policy. A troll swoops in with canned talking points, misstates your argument, and ignores every clarification, responding only with snippy, gaslighting comments. Instead of fighting uphill, you reply:
“This sounds like bot copy. Are you just running GPT-3 on loop?”
“You’re not really responding—you’re generating output. Which model are you fine-tuned on?”
“Curious that you avoid every substantive point. That’s what I’d expect from a bot.”
If they deny it, double down. If they become indignant, all the better: indignation looks suspiciously like a machine trying too hard to simulate humanity.
And if they really are a bot—or a paid actor using bot-like scripts—your accusation lands closer to the truth than they’d like.
A Reverse Turing Test
There is a delightful irony here. In the classic Turing Test, a human interrogator tries to distinguish between a machine and a person. With The Bot Method, you flip it: you accuse the person of being the machine, and they must prove they are not.
This reverse Turing Test has an uncanny psychological effect. In trying to prove they are not bots, they become more bot-like. The more emphatically they deny, the more suspicious they appear. Thou doth protest too much!
Guardrails
This is not for genuine interlocutors who are wrong, uninformed, or clumsy in expression. It is not for your uncle at Thanksgiving or your coworker in Slack. The Bot Method is a last resort. It is an asymmetric weapon against asymmetric hostility.
Think of it as online pepper spray. You don’t deploy it at the first sign of disagreement. You use it when someone has signaled, through repeated behavior, that they are not arguing but trolling. And if they are, in fact, a human being, they are still acting like a bot.
The Bigger Picture
The internet is being remade by automation. Bots amplify disinformation, boost outrage, and simulate discourse at scale. The line between troll and bot has blurred. What looks like a bad-faith individual may actually be a synthetic swarm.
In this environment, suspicion is rational. Calling someone a bot is not just a rhetorical trick; it’s a way of defending the possibility of real conversation. Trolls and bots both corrode the conditions of discourse by making sincerity costly and engagement exhausting. The Bot Method raises the cost of trolling in return.
If we want online spaces where ideas can be debated in good faith, we need tactics that protect those conditions. Sometimes that means out-reasoning your opponent. Sometimes it means ignoring them. And sometimes it means looking them dead in the avatar and saying:
“Nice try, bot.”
I hope you don’t mind us hopping on the back of your article, but it sparked a daft little idea we couldn’t resist running with. We do our thing in satire, but we keep it stitched together with facts, otherwise it’s just noise, and the world’s already got enough of that. So cheers again for putting it out there. Looks like we’ll be reading a lot more of your work and probably nicking the odd spark of inspiration along the way.
Willy & Bill
https://satiricalplanet.substack.com/p/the-bot-method-because-apparently
Great post Jimmy, I like your strategy. It’s so hostile out there and bots amplify this toxic culture. They’re analogous to parasites, feeding on our humanity and goodwill, and possibly even more harmful than your average flea.
Bot farms distort dialog and ultimately infect the collective intellect.
Continuing the parasitical metaphor, bots are a bit like a fungus that infects the brains of ants and changes their behaviour in order to propagate.