LLMs Can Get Brain Rot

Conclusion

In this work, we introduced and empirically validated the LLM Brain Rot Hypothesis: continual exposure to junk data, defined as engaging (fragmentary and popular) or semantically low-quality (sensationalist) content, induces systematic cognitive decline in large language models. The decline spans worse reasoning, poorer long-context understanding, diminished adherence to ethical norms, and emergent socially undesirable personality traits.

Fine-grained analysis shows that the damage is multifaceted, altering reasoning patterns in several ways, and that it persists even after large-scale post-hoc tuning. These results call for a re-examination of current Internet data collection and continual pre-training practices: as LLMs scale and ingest ever-larger corpora of web data, careful curation and quality control will be essential to prevent cumulative harms.
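To make the curation idea concrete, here is a minimal sketch of a junk-data filter in the spirit of the paper's two operationalizations of junk content: an engagement heuristic (short but highly popular posts) and a quality heuristic (sensationalist wording). The `Post` fields, thresholds, and keyword list are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch of a junk-data filter inspired by the paper's two
# junk-data operationalizations. All thresholds, field names, and the
# keyword list below are illustrative assumptions, not the authors' pipeline.
from dataclasses import dataclass

# Hypothetical markers of sensationalist wording (quality heuristic).
SENSATIONAL_MARKERS = {"shocking", "you won't believe", "breaking", "!!!"}

@dataclass
class Post:
    text: str
    likes: int      # popularity signal
    retweets: int   # popularity signal

def is_junk(post: Post, max_tokens: int = 30, min_popularity: int = 500) -> bool:
    """Flag a post as junk if it is engaging (short but highly popular)
    or semantically low-quality (sensationalist wording)."""
    tokens = post.text.split()
    engaging = len(tokens) < max_tokens and (post.likes + post.retweets) >= min_popularity
    lowered = post.text.lower()
    sensational = any(marker in lowered for marker in SENSATIONAL_MARKERS)
    return engaging or sensational

# Usage: keep only the non-junk portion of a corpus before continual pre-training.
corpus = [
    Post("SHOCKING: you won't believe what this model did!!!", likes=12_000, retweets=3_400),
    Post("A detailed walkthrough of attention-head pruning and its trade-offs.", likes=40, retweets=3),
]
clean = [p for p in corpus if not is_junk(p)]
print(len(clean))  # -> 1
```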

So even AI gets brain fog these days…

But isn't this that English reading passage from our second year of high school?

Oh wait, this one is AI brain rot

Not quite, this one is about AI getting brain fog; the one from second year was pure brain fog

Your English exams test stuff this hard? Let me quietly envy you for a bit

Pose the question → run experiments → collect data → summarize → look to the future

Or it could be a next-level food safety problem...
