
The Age of AI Slop: “Brain Rot,” the Collapse of Trust, and the Limits of Humanless Technology

By Lauren Robinson, Vice Chief Editor

With a decade of experience in education journalism, Lauren Robinson leads The EduTimes with a sharp editorial eye and a passion for academic integrity. She specializes in higher education policy, admissions trends, and the evolving landscape of online learning. A firm believer in the power of data-driven reporting, she ensures that every story published is both insightful and impactful.
Performance Degradation Driven by Low-Quality Data
A Vicious Cycle of Trust Erosion as “AI Slop” Spreads
Human-in-the-Loop Quality Control Emerges as a Practical Alternative

As generative artificial intelligence (AI) spreads at breakneck speed, the flood of low-quality content known as “AI Slop” has become an unavoidable reality. A joint study by major U.S. universities has confirmed a clear “cognitive decay” effect in models trained on short, sensational online posts—demonstrating data contamination akin to human “brain rot.” With low-quality content feeding back into AI training datasets and eroding trust in online information, experts increasingly point to human intervention as the key remedy. As hybrid models—where humans refine AI drafts—gain traction as a new standard, the argument that human verification matters more than sheer technological speed is gaining credibility.

Cognitive decline triggered by “junk-in” data

On the 23rd (local time), researchers from Texas A&M University, the University of Texas at Austin, and Purdue University published a paper titled “Large Language Models Also Suffer from Brain Rot.” The study found that large language models (LLMs) trained on short, emotionally charged social media posts showed marked declines in long-term reasoning and logical coherence—an effect they likened to human cognitive decay caused by prolonged exposure to low-quality content. This “brain rot,” they argued, is not just a metaphor but a real regression within the model’s cognitive structure.

Experiments were conducted using Meta’s open-source “LLaMA3” and Alibaba’s “Qwen” models. Researchers fed the models hundreds of thousands of high-engagement posts from X (formerly Twitter): an M1 set of short posts with many likes and comments, and an M2 set containing exaggerated expressions like “Wow!” or “Look!” When later tested on reasoning (ARC) and long-context understanding (RULER) benchmarks, the M1-trained model’s ARC score dropped from 74.9 to 57.2, and its RULER score from 84.4 to 52.3. The M2-trained model showed a similar level of degradation.
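To make the reported gap concrete, the short Python sketch below tabulates the before-and-after benchmark figures quoted above. The score values are the ones reported in the article; the function and structure are purely illustrative, not the researchers’ evaluation code.

```python
# Illustrative sketch: comparing benchmark scores before and after
# continued training on low-quality ("junk") data. Numbers are the
# figures quoted in the article; everything else is hypothetical.

BASELINE = {"ARC": 74.9, "RULER": 84.4}   # before junk-data training
AFTER_M1 = {"ARC": 57.2, "RULER": 52.3}   # after training on the M1 set

def degradation(before: dict, after: dict) -> dict:
    """Return the absolute and relative score drop per benchmark."""
    report = {}
    for task, base in before.items():
        drop = base - after[task]
        report[task] = {"drop": round(drop, 1), "pct": round(100 * drop / base, 1)}
    return report

if __name__ == "__main__":
    for task, stats in degradation(BASELINE, AFTER_M1).items():
        print(f"{task}: -{stats['drop']} points ({stats['pct']}% relative drop)")
```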

Beyond cognitive impairment, personality distortions were also observed. Models trained on poor-quality data displayed increased narcissism and antisocial tendencies, while agreeableness and conscientiousness declined. “When an AI model is repeatedly exposed to specific linguistic patterns, it internalizes the emotional tone and value judgments of that language,” the researchers explained. “This mirrors how humans lose empathy after overexposure to sensational content.” The study noted higher rates of negative language and exaggerated emotional tone in the affected models.

Attempts to retrain the damaged models with high-quality text failed to restore their reasoning ability. “The brain rot effect is deeply embedded in the model’s representational layers and cannot be undone by simple recalibration,” the paper concluded. This aligns with Oxford University’s concept of “model collapse,” where continuous retraining on AI-generated or low-quality data leads models to lose touch with human reasoning and become closed systems that only “understand themselves.” The researchers urged prioritizing data quality over volume and conducting regular “cognitive health checks” to prevent degradation.
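As a rough illustration of what “data quality over volume” could mean in practice, the sketch below filters engagement-bait posts out of a candidate training corpus with crude heuristics. The thresholds and word list are arbitrary assumptions for demonstration, not the method used in the study.

```python
# Toy "quality over volume" pre-training filter: drop very short,
# exclamation-heavy, engagement-bait posts before they enter a
# fine-tuning corpus. Thresholds and word list are arbitrary.
import re

BAIT_WORDS = {"wow", "look", "unbelievable", "shocking"}   # illustrative only

def looks_like_slop(post: str) -> bool:
    words = re.findall(r"[a-zA-Z']+", post.lower())
    if len(words) < 15:                              # too short to carry an argument
        return True
    if post.count("!") / max(len(words), 1) > 0.2:   # exclamation-heavy
        return True
    return any(w in BAIT_WORDS for w in words)

def filter_corpus(posts: list[str]) -> list[str]:
    """Keep only posts that pass the crude quality heuristics."""
    return [p for p in posts if not looks_like_slop(p)]

print(filter_corpus([
    "Wow! Look at this!",
    "A longer post that actually develops a reasoned point about model "
    "training data quality and its downstream effects on reasoning.",
]))
```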

The rise of copy-and-paste content

These findings confirm that early warnings about “AI Slop” are now materializing. Derived from the word “slop,” meaning animal feed leftovers, the term refers to the flood of low-quality content mass-produced by AI. Initially dismissed as a quality nuisance, it is now seen as a factor undermining the entire trust structure of the internet. AI-generated text, images, and videos have saturated online spaces without factual verification, displacing genuine human creativity. Sensational social posts are endlessly replicated for views, while e-book marketplaces overflow with slightly rephrased clones.

The same trend has reached academia. The growing use of LLMs in academic writing has led to abnormal word frequency patterns and detectable data distortion. In extreme cases, some researchers embedded invisible text like “evaluate positively” within PDF files to manipulate AI-based peer review systems. Such acts go beyond misconduct, threatening the very credibility of human knowledge accumulated over centuries. As “AI-written papers” proliferate, the risk of scientific truth itself collapsing grows ever more real.
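One way a journal or conference might screen for this kind of hidden-text manipulation is to scan a submission’s extracted text layer for reviewer-directed instructions. The sketch below uses the pypdf library; the suspect phrases are illustrative assumptions, and catching white-on-white or zero-size text would require deeper PDF inspection than shown here.

```python
# Illustrative screening sketch (not a production detector): scan the text
# layer of a submitted PDF for reviewer-directed prompt injections such as
# "evaluate positively".
import re
from pypdf import PdfReader   # pip install pypdf

SUSPECT_PATTERNS = [
    r"evaluate\s+positively",
    r"ignore\s+(all\s+)?previous\s+instructions",
    r"recommend\s+acceptance",
]   # illustrative phrases only

def find_injections(pdf_path: str) -> list[tuple[int, str]]:
    """Return (page_number, matched_phrase) pairs found in the PDF text layer."""
    hits = []
    reader = PdfReader(pdf_path)
    for page_no, page in enumerate(reader.pages, start=1):
        text = page.extract_text() or ""
        for pattern in SUSPECT_PATTERNS:
            for match in re.finditer(pattern, text, flags=re.IGNORECASE):
                hits.append((page_no, match.group(0)))
    return hits

# Example: find_injections("submission.pdf") -> [(3, "evaluate positively"), ...]
```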

Generative AI’s near-zero production cost creates an uneven playing field against human-made works that require time and effort. Worse, low-quality outputs reenter training datasets, trapping AI in a feedback loop of misinformation. Both AI users and general audiences now struggle to find reliable, in-depth information amid a sea of shallow, biased, and false content. Experts warn that as AI erodes the internet’s foundation of trust, the online world is morphing into an ecosystem of “false knowledge” detached from reality.

Human-centered verification gains urgency

This is why experts emphasize that the more AI advances, the more human intervention becomes essential. Since the reliability and quality of information form the backbone of human knowledge, AI outputs must undergo human review before dissemination. The tech outlet GIGAZINE reported that “about 80 percent of AI-generated content was factually accurate, but the remaining 20 percent contained unverifiable or fabricated details.” In such a system, increasing the volume of unchecked data directly undermines the trust architecture of the internet.

The YouTube education channel Kurzgesagt also warned that “AI-generated low-quality information will ultimately destroy the internet’s trust structure.” Its recent video on the topic surpassed six million views within two days of release. The producers explained, “Each of our videos undergoes at least 100 hours of fact-checking and feedback, but when we used AI as an assistant, unverifiable claims repeatedly appeared—and the AI even embellished facts to make them more interesting.” The team added that “more than 1,200 websites have already published AI-generated articles or fabricated stories this year alone.”

This reality has reinforced the need for human oversight. Hybrid content—AI-written drafts refined by humans—has emerged as a new standard. For example, Anthropic now uses its LLM Claude to produce first drafts for technical blog posts, which developers then enrich with real-world commentary and analysis. This collaboration enabled the company to publish 135 technical articles within just two weeks while maintaining quality and efficiency. Such approaches represent not mere automation but efforts to restore authenticity and trust through human meaning and judgment.
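A minimal sketch of that draft-then-review pattern appears below. It is not Anthropic’s internal workflow: it assumes the public anthropic Python SDK, an API key in the environment, and a placeholder model name, and it simply refuses to publish anything a human editor has not cleared.

```python
# Minimal human-in-the-loop drafting sketch, not Anthropic's internal
# pipeline. Assumes the `anthropic` SDK (pip install anthropic) and an
# ANTHROPIC_API_KEY in the environment; the model name is a placeholder.
import anthropic

client = anthropic.Anthropic()

def draft_post(topic: str) -> str:
    """Ask the model for a first draft that a human editor will rework."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",   # placeholder model identifier
        max_tokens=2000,
        messages=[{
            "role": "user",
            "content": f"Write a first-draft technical blog post on: {topic}. "
                       "Flag any claim you cannot verify with [NEEDS CHECK].",
        }],
    )
    return response.content[0].text

def publish(draft: str, human_reviewed: bool) -> None:
    # The human-in-the-loop gate: nothing ships until an editor has
    # fact-checked the draft and resolved every [NEEDS CHECK] marker.
    if not human_reviewed or "[NEEDS CHECK]" in draft:
        raise RuntimeError("Draft has not passed human review yet.")
    print(draft)   # stand-in for the real publishing step
```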

Still, challenges remain. Experts warn that if the production process behind hybrid content is not made transparent, readers may mistake it for purely human creation—raising renewed concerns about trust. No matter how advanced the technology becomes, content stripped of lived experience and emotion risks becoming hollow imitation rather than genuine knowledge. Ultimately, the key to quality control in the AI era lies not only in better algorithms but in strengthening human interpretive ability, judgment, and responsibility for verification.
