When AI Learns From Social Media, Everyone Loses
- Arash Nia

- Nov 3
- 2 min read

Large language models can develop "brain rot" when trained on junk social-media text.
That's what a new paper from Texas A&M, UT Austin, and Purdue University found: exposure to low-quality, engagement-optimized social-media text led to "thought-skipping" behavior.
This means the model's reasoning ability drops, its long-context comprehension drops, and its ethical alignment degrades.
Normally, a model that reasons well follows a chain of thought: step 1 → step 2 → step 3 → conclusion. With thought-skipping, the model truncates or skips parts of that chain (e.g., step 2 or step 3), jumping prematurely to a conclusion or producing less structured reasoning.
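To make that concrete, here's a toy Python sketch. It's my own illustration, not anything from the paper: a crude heuristic that counts explicit "Step N" markers in an answer and flags outputs that jump to a conclusion. The example answers and the three-step threshold are assumptions for illustration only.

```python
import re

def count_reasoning_steps(answer: str) -> int:
    """Count explicit 'Step N' markers in a model's answer.
    A crude proxy; real evaluations of reasoning are far more sophisticated."""
    return len(re.findall(r"step\s*\d+", answer, flags=re.IGNORECASE))

# A healthy chain of thought walks through every step.
full_chain = (
    "Step 1: 17 * 24 = 17 * 20 + 17 * 4. "
    "Step 2: 17 * 20 = 340 and 17 * 4 = 68. "
    "Step 3: 340 + 68 = 408. Conclusion: 408."
)

# A thought-skipping answer jumps from the setup straight to a conclusion.
skipped_chain = "Step 1: 17 * 24. Conclusion: 408."

for name, answer in [("full", full_chain), ("skipped", skipped_chain)]:
    steps = count_reasoning_steps(answer)
    # This task needs a 3-step chain; fewer markers suggests skipping.
    flag = "OK" if steps >= 3 else "possible thought-skipping"
    print(f"{name}: {steps} steps -> {flag}")
```

Both answers reach 408, which is exactly the trap: the conclusion can look fine even when the reasoning that should support it has been hollowed out.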
In other words, when AI consumes what today’s social media rewards most, it starts thinking worse.
Why does this happen?
Social-media content is often short and highly engaging (clicks, likes) but low in semantic richness: shallow reasoning, sensationalism, meme-style content.
If an LLM trains on large amounts of this "engagement-bait" content, the learning signal pushes it toward patterns that favor brevity, punchlines, and high engagement rather than complete logical chains.
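One way to act on that observation, sketched here as a hypothetical data filter rather than the paper's method: drop posts whose popularity far outstrips their substance before they ever shape the learning signal. The `Post` structure and both thresholds below are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int

def looks_like_engagement_bait(post: Post,
                               max_words: int = 20,
                               min_engagement: int = 1000) -> bool:
    """Heuristic: very short yet very popular posts are likely
    engagement bait. Thresholds are illustrative, not tuned."""
    word_count = len(post.text.split())
    engagement = post.likes + post.shares
    return word_count <= max_words and engagement >= min_engagement

corpus = [
    Post("You won't BELIEVE what this AI did next!!!", 50_000, 12_000),
    Post("A step-by-step derivation of attention scaling, with proofs "
         "and counterexamples, plus when each assumption breaks down.", 40, 3),
]

# Keep only posts that don't trip the engagement-bait heuristic.
training_set = [p for p in corpus if not looks_like_engagement_bait(p)]
print(f"kept {len(training_set)} of {len(corpus)} posts")
```

A real pipeline would score semantic quality with a trained classifier rather than word counts, but the incentive mismatch this filter targets, popularity as a proxy for value, is the same one the paper points at.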
What's the takeaway for all of us training LLMs or building social platforms?
In the race to make artificial intelligence smarter, the broken business models of incumbent social platforms are teaching it how to be just as stupid as we are.
When digital ecosystems reward virality over depth and attention over accuracy, the output becomes predictable: shallow, repetitive, and detached from meaning.
And now that same incentive system is shaping the data AI learns from.
As AI becomes more integrated into how we work and consume information, addressing this becomes urgent.
It is turning into a data-integrity problem, one that will shape how intelligent, aligned, and trustworthy our future systems become.
If we keep optimizing for eyeballs instead of value, we’ll keep training both humans and machines to prioritize the wrong things.
We used to say, “You are what you eat.”
Now, it’s “AI is what we post.”
The more AI integrates into our daily lives, the more urgent it becomes to fix the social platforms that teach it and us how to think.
"LLMS CAN GET “BRAIN ROT”!" paper I mentioned above for the curious: https://arxiv.org/abs/2510.13928




