Wikipedia is one of the last bastions of the good old Internet, where humans work together to produce useful knowledge. It is the opposite of the social media hellhole that has metastasized into a cultural cancer, dedicated to spewing sensational drivel to degrade, subdue, and monetize humanity in service of the surveillance-capitalist oligarchy.
In its defense of knowledge and human dignity, Wikipedia is wisely banning text generated by large language models, the pattern predictors that we casually call artificial intelligence:
Text generated by large language models (LLMs) often violates several of Wikipedia’s core content policies. For this reason, the use of LLMs to generate or rewrite article content is prohibited, save for the exceptions given below.
Editors are permitted to use LLMs to suggest basic copyedits to their own writing, and to incorporate some of them after human review, provided the LLM does not introduce content of its own. Caution is required, because LLMs can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited.
The use of LLMs to translate articles from another language’s Wikipedia into the English Wikipedia must follow the guidance laid out at Wikipedia:LLM-assisted translation.
Some editors may have similar writing styles to LLMs. More evidence than just stylistic or linguistic signs is needed to justify sanctions, and it is best to consider the text’s compliance with core content policies and recent edits by the editor in question [Wikipedia: Writing Articles with Large Language Models, updated 2026.03.25, retrieved 2026.03.26].
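If "pattern predictor" sounds dismissive, it is also literal. Strip away the branding and an LLM does, at enormous scale, something like this toy sketch. The probability table here is made up purely for illustration; a real model learns billions of weights from training text rather than a hand-coded lookup:

```python
# Toy illustration of next-token prediction. A real LLM learns billions of
# weights over subword tokens; this hard-coded table stands in for
# "probability of the next word given the context so far".
import random

# Made-up conditional probabilities: {context word: [(next word, prob), ...]}
NEXT_WORD = {
    "the": [("cat", 0.5), ("dog", 0.3), ("internet", 0.2)],
    "cat": [("sat", 0.7), ("ran", 0.3)],
    "sat": [("down", 0.6), ("quietly", 0.4)],
}

def predict_next(word: str) -> str:
    """Sample the next word from the (toy) conditional distribution."""
    candidates = NEXT_WORD.get(word)
    if not candidates:
        return "."  # no pattern learned for this context: stop
    words, probs = zip(*candidates)
    return random.choices(words, weights=probs, k=1)[0]

def generate(start: str, max_words: int = 10) -> str:
    """Generate text by repeatedly predicting the next word."""
    out = [start]
    for _ in range(max_words):
        nxt = predict_next(out[-1])
        if nxt == ".":
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat down"
```

Scale that table up by trillions of words of training text and you get fluent prose from a machine that has no concept of whether a source actually supports a claim, which is exactly the problem Wikipedia's policy is guarding against.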
Algorithms are generating more online content than humans. We humans need some all-organic discourse space, like Wikipedia… and this blog. Don’t bury the human voice in machine noise.
Let's all just sit back and let AI take over. What could go wrong?