Dead Internet Theory
A conspiracy theory that spread across online message boards in the mid-2010s and into the early 2020s posits that the internet is now primarily the realm of automated messages and bots, and humans have consequently been elbowed out of the conversation.
This “Dead Internet Theory” is varied in its specifics, as conspiracy theories tend to be, and it’s not always intended (by those spreading it) to be believed in its entirety.
That said, the current version of this concept claims that sometime around 2016, generative artificial intelligence systems (forerunners of the large language models and image generators that now power ChatGPT and DALL-E) got good enough that tech companies and governments could essentially fill up the web, including the majority of its pages, social networks, and forums, with artificially created text, images, and videos.
The content is (depending on who you listen to) either aligned with the big-picture goals of these entities or meant to just garble the conversation so badly that real-deal humans can’t communicate with each other and cross-pollinate the way they once did.
Live conversation on the internet is purportedly generated on the fly by chatbots capable of endlessly spouting nonsense and sparking arguments on any topic that might serve their makers’ or handlers’ goals.
That theorized 2016 date for the death of the human internet lines up with a report from a security firm called Imperva, published that year, which found that bots and their activity were responsible for 52% of all web traffic in 2016, the first time they accounted for the majority. It should be noted that many of those bots were simply crawling websites for keywords to inform search engine results, not actively engaging with each other or with humans on Facebook or in YouTube comment sections. Still, that figure provides a sense of why this theory, though based on essentially nothing, doesn’t sound as impossible as it would have even as recently as the early 2010s.
This concept is further boosted by revelations that a lot of social media activity, likes and follows and such, is automated by bots and by humans running bot-like systems optimized to mess with ad-revenue data, promote all sorts of businesses and scams, or provide a paid service to clients who want to juice their apparent notoriety by buying followers.
The emergence of consumer-grade, GPT-based AI tools like ChatGPT in late 2022 further amplified these concerns, as it’s now simple for an everyday person with no coding experience to wield AI bot powers: churning out convincing text, generating images and videos from nothing, and spreading this synthetically fabricated content around the internet.
There are (perhaps warranted) concerns that the sheer volume of AI content being churned out will overshadow or overwhelm content produced by humans.
There are also concerns that, because many of these systems are trained on data and media scraped from the internet, future AI systems will become less aligned with human preferences and needs as the net fills with AI-generated work, since they’ll be trained on far more AI-produced content than work made by humans.
But there are also worries that this deluge of content might make these channels unusable. While some AI products are undeniably impressive, entertaining, or otherwise valuable, a potentially infinite quantity of anything can smother non-infinite versions of the same thing. Conversations with fellow humans on such channels could thus become rarer, with AI making up the vast majority of potential conversation partners.
Again, there’s no reason to believe this theory or any of its associated theories are based on anything other than (at times entertaining) speculation. But the concerns informing it are predicated on genuine problems that seem to be playing out faster than folks a handful of years ago might have expected, and that means our communication channels may become less reliable until we can figure out a means of confidently filtering human signal from AI noise within this new paradigm.