Why Platforms Reward Extremism: When a National Guardsman’s Terror Plot Proves the Business Model
A National Guardsman allegedly planned to establish an American caliphate, manufactured 3D-printed guns, and shipped them to Al Qaeda—all while organizing on Discord and building a following that treated his escalating radicalism like content. Federal prosecutors say he wasn’t just plotting terrorism. He was performing it, documenting each step, growing an audience around increasingly extreme acts. The platform economics made perfect sense of his behavior.
This isn’t about one disturbed individual. By tying reach, payouts, and social status to raw engagement, today’s platforms create a market where influencers, scammers, and even extremists convert ever more sensational claims and stunts into visibility, status, and profit. The guardsman didn’t radicalize in a vacuum. He radicalized inside an economic system that rewards anyone willing to go further than the last person.
You need to understand why platforms reward extremism, because this pattern is everywhere now. It’s not about ideology anymore. It’s about the business model.
Why Platforms Reward Extremism: The Engagement Economy Makes It Profitable
Platforms don’t pay you for being right. They pay you for keeping people’s attention. Watch time. Comments. Shares. Replies asking “wait, is this real?” Every single one counts as engagement, and engagement converts directly into visibility, which converts into followers, which converts into money.

The guardsman’s Discord server worked exactly like a creator’s community. People showed up for the shock value. They stayed because each post went further than the last. Axios reported in March 2024 that TikTok’s creator program pays based purely on watch time—not accuracy, not quality, just whether people keep watching. AI-generated misinformation videos were pulling in serious money because confusion keeps people engaged longer than facts.
Normal content gets ignored. Extreme content gets shared. “Did you see this guy?” becomes free marketing. The algorithm doesn’t ask if you’re promoting terrorism or wellness advice. It asks if you’re holding attention.
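Here's the whole system reduced to a sketch. The field names and weights below are invented for illustration, not pulled from any platform's actual ranking code, but the shape is the point: accuracy and harm exist in the data and never touch the score.

```python
# A minimal sketch of engagement-only ranking. Field names and weights are
# illustrative assumptions, not any platform's real formula.
from dataclasses import dataclass

@dataclass
class Post:
    watch_time_seconds: float
    comments: int
    shares: int
    is_accurate: bool   # present in the data, never read by the ranker
    is_harmful: bool    # same

def engagement_score(post: Post) -> float:
    # Everything that holds attention adds reach; nothing subtracts for harm.
    return post.watch_time_seconds + 5 * post.comments + 10 * post.shares

calm = Post(watch_time_seconds=40, comments=2, shares=1, is_accurate=True, is_harmful=False)
shock = Post(watch_time_seconds=180, comments=60, shares=45, is_accurate=False, is_harmful=True)

print(engagement_score(calm), engagement_score(shock))  # 60.0 vs 930.0: shock wins distribution
```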
The Money Trail
Follow the cash and you’ll see why this keeps happening. When TikTok launched its creator fund, payouts went to whoever racked up the most watch time. Health influencers figured this out fast. The Guardian found in February 2025 that wellness creators were using fear tactics to promote health tests with basically no scientific backing. They’d post videos claiming common symptoms indicated serious diseases, then link to expensive test kits in their bio.
The more scared you got, the longer you watched. And the longer you watched, the more money they made. Some of these creators pulled in six figures monthly using nothing but medical fear-mongering. The platforms knew. They kept paying.
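The arithmetic on the creator side is just as blunt. Here's a back-of-envelope version with a made-up payout rate, not TikTok's actual numbers: the only variable that moves the check is how long people keep watching.

```python
# Toy creator-fund payout driven purely by watch time. The rate and view counts
# are hypothetical placeholders, not any platform's published figures.
PAYOUT_PER_WATCH_HOUR = 0.10  # assumed rate, dollars per hour watched

def monthly_payout(viewers: int, avg_watch_minutes: float, posts: int) -> float:
    return viewers * (avg_watch_minutes / 60) * PAYOUT_PER_WATCH_HOUR * posts

# A sober explainer people skim vs. a fear-driven video they watch to the end.
print(monthly_payout(viewers=100_000, avg_watch_minutes=0.5, posts=30))  # ~$2,500
print(monthly_payout(viewers=100_000, avg_watch_minutes=3.0, posts=30))  # ~$15,000
```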
Crypto scammers use the same playbook. Haliey Welch launched a memecoin called $HAWK that crashed 95% within hours, but not before early insiders cashed out millions. She built her following on viral moments, then converted that attention into a pump-and-dump scheme. Her megaphone came from the platform. Engagement metrics told her she had an audience ready to buy whatever she sold. The economics made the scam inevitable.
Escalation Becomes the Only Strategy That Works
Once you’re inside this system, you can’t compete by being reasonable. Reasonable gets buried. You have to escalate or disappear.
Look at what happened in wellness spaces. Joe Wicks started as a fitness instructor posting workout videos. Harmless stuff. But Marie Claire covered his shift toward increasingly extreme anti-processed-food rhetoric that nutritionists warned was oversimplifying complex health issues. The more dramatic his claims, the more engagement he got. Going back to balanced advice would’ve tanked his reach.
Axios reported in July 2025 that Gen Z influencers were giving RFK Jr.’s anti-vaccine movement serious momentum by making increasingly wild health claims. These weren’t fringe accounts. They were mainstream creators with millions of followers who discovered that vaccine skepticism drove more engagement than actual medical advice.
By August, “MAHA mom-fluencers” were building entire brands around medical conspiracy theories, selling supplements and courses to audiences they’d grown through escalating fear.
The National Guardsman’s alleged progression follows the exact same arc. You don’t start by shipping guns to Al Qaeda. You start with edgy political posts. Then more extreme ideology. Then actionable plans. Each step tests how far you can go while growing your audience. More reach means more followers who expect you to keep pushing boundaries. By the time you’re 3D-printing weapons, you’re just giving your audience what the engagement metrics told you they wanted.
The Algorithm Doesn’t Care What You’re Escalating
The algorithm treats all escalation the same. It can't tell the difference between a fitness influencer making increasingly dramatic health claims and a guardsman planning domestic terrorism. Both generate engagement. Both keep users on platform. Both make money.
Pig-butchering scams prove this. Research published in 2025 showed these romance scams work by escalating emotional investment over weeks or months, building to larger and larger “investment opportunities.” The scammers use the same engagement optimization that legitimate creators use—they’re just optimizing for financial manipulation instead of ad revenue.
Reuters reported in October 2025 that Brazilian scammers using Gisele Bündchen deepfakes in Instagram ads stole millions using platform advertising tools built for legitimate businesses.
The platforms gave scammers the same promotional tools they gave everyone else. Why? Because scam ads generate engagement. Victims clicking through, commenting, sharing with friends asking “is this real?”—it all counts the same to the algorithm. The fact that it’s fraud is somebody else’s problem.
“But Most Creators Don’t Do This”
Sure, most creators don’t escalate to extremism or scams. Most people posting on these platforms are just sharing normal stuff. The problem is bad actors abusing the system, not the system itself. Plus, platforms are labeling misinformation now and removing harmful content, right?
Wrong. This argument misses how economic incentives work.
Yes, most creators don’t radicalize. But the ones who do escalate are the ones the algorithm rewards. They get the reach and the money, and then they become the template other creators see working.
The Week covered Casey Means, a wellness influencer with no traditional medical qualifications who got nominated for surgeon general based largely on her massive social media following. She built that following by making increasingly bold claims about nutrition and health that went way beyond her actual expertise. Platform economics made her more influential than thousands of actual doctors who stuck to peer-reviewed science.
When extremism is more profitable than accuracy, the system selects for extremism. Period.
Content Moderation Is Theater
Platforms love to announce they’re removing harmful content and labeling misinformation. Great PR. But Yale research from September 2025 found that flagging misinformation reduces engagement by maybe 5-10%. That’s not fixing the problem. That’s accepting 90% of the problem as the cost of doing business.
Davidson College found in October 2024 that simple corrections can slow misinformation spread—but only if the platform actually shows those corrections to the people who saw the original false claim. Most platforms don’t do that. They show corrections to people who don’t need them and let the misinformation keep circulating to everyone else.
The National Guardsman allegedly operated on Discord for months, building a following around increasingly explicit extremist content. The platform didn’t shut him down. Federal agents did.
Indonesia had to publicly pressure TikTok and Meta in August 2025 to act against harmful content because the platforms weren’t doing it on their own. These companies only enforce rules when the PR cost of not enforcing them exceeds the profit they’re making from the harmful content.
That’s not moderation. That’s cost-benefit analysis.
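You can write that cost-benefit analysis in a few lines. Every figure here is a placeholder rather than anything from a platform's books, but it captures the decision rule: the content comes down only when the reputational cost finally outweighs the revenue it's still generating.

```python
# Toy moderation decision as pure cost-benefit. All numbers are placeholders.
def stays_up(revenue_per_view: float, views: int,
             flag_penalty: float, pr_cost: float) -> bool:
    # A misinformation label trims engagement by a small fraction (the roughly
    # 5-10% range cited above); the rest still monetizes.
    revenue_if_kept = revenue_per_view * views * (1 - flag_penalty)
    return revenue_if_kept > pr_cost

# Harmful-but-viral content, labeled but still live: 10 million views, 10% penalty.
print(stays_up(0.002, 10_000_000, flag_penalty=0.10, pr_cost=1_000))    # True: quiet week, it stays
print(stays_up(0.002, 10_000_000, flag_penalty=0.10, pr_cost=150_000))  # False: regulators call, "enforcement" follows
```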
This Is Getting Worse
Every platform that launches a creator fund or monetization program speeds up this pattern. They’re not trying to reward extremism. They’re trying to compete for creators and users. But when you pay for attention without caring how that attention got generated, you’re funding an arms race in sensationalism.
The guardsman’s alleged plot isn’t an aberration. It’s what happens when someone takes platform incentives to their logical conclusion. You want reach? Post something shocking. You want more reach? Post something more shocking. You want status and influence? Keep escalating until you’re the most extreme voice in your space.
Wellness influencers escalate to anti-vaccine conspiracies. Crypto promoters escalate to pump-and-dump schemes. Romance scammers escalate to six-figure frauds. And yeah, sometimes National Guardsmen escalate to domestic terrorism. The platform economics work the same way for all of them.
When a business model systematically rewards escalation, you get systematic escalation. When platforms profit from keeping users engaged regardless of what’s engaging them, they become economically dependent on the exact content they claim to be fighting.
Because It’s Profitable
Strip away the PR language and the content moderation theater, and you’re left with a simple truth: extreme content makes platforms more money than normal content. It generates more engagement. Keeps users on platform longer. Creates more advertising opportunities. And it costs the platforms basically nothing to host.
The costs—radicalization, scams, misinformation, actual terrorism—get pushed onto everyone else. Society pays those costs. Victims pay those costs. Federal agents investigating alleged terror plots pay those costs. The platforms just collect the advertising revenue and keep whatever's left after the creator fund payouts.
That’s why this keeps happening. Not because platforms are evil. Because they’re profitable.
What This Means for Everything Else
Once you see this pattern, you can’t unsee it. Every influencer drama. Every viral scam. Every conspiracy theory that somehow gets more reach than actual reporting. They’re all products of the same economic system.
The National Guardsman who allegedly planned an American caliphate, manufactured 3D-printed guns, and coordinated with Al Qaeda wasn’t operating outside the platform economy. He was operating inside it, following the same engagement-maximizing strategies as everyone else, just taking them to their most extreme conclusion.
The platforms didn’t make him a terrorist. But they made terrorism more visible, more profitable, and more socially rewarded than whatever he might have done instead. They created a market for escalation and paid him in reach and status for supplying it.
How many more National Guardsmen are out there right now, watching their engagement numbers climb as they post increasingly extreme content, learning from the platform metrics that escalation is the path to influence? And what happens when they figure out—like the wellness influencers and crypto scammers already have—that this audience they’ve built will follow them anywhere?
That’s why you need to understand why platforms reward extremism. Because the business model isn’t changing. It’s spreading.