Over the past few decades, the internet has gone through several major transformations — from a niche space for technical users, to a mainstream social hub, and now increasingly toward a machine-operated environment.
Each phase has brought both progress and trade-offs. Today, we’re entering a new stage that may fundamentally change what the internet is — and who it’s for.
With the growing adoption of AI-generated content and search engine answers that remove the need to visit websites at all, we’re seeing early signs of what some call the “Dead Internet” theory: a web where content is largely produced, managed, and consumed by machines, not people.
This may be the real Web 3.0 — not the version hyped through crypto, tokens, and blockchain — but a new generation of the web dominated by generative models, algorithmic curation, and minimal human authorship.

This trend raises important questions. Not in a dystopian or alarmist sense, but in a practical, long-term one: what happens to the open web when it no longer needs people to contribute? Or worse, when contributing knowledge no longer feels worth people's time and effort?
A Web Built by Experts
The early internet was created and maintained by people with technical knowledge and purpose. Publishing content in the 90s or early 2000s required real effort: hand-coding HTML, running your own server, understanding domain registration, and writing with clarity for a mostly technical audience. (Fun fact: the first website I made was hand-coded in Notepad.exe, weighed just a few hundred KBs, and proudly featured the — at the time — revolutionary Comic Sans font, introduced in 1994.)
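For anyone who never had the pleasure, a page from that era looked roughly like this (an illustrative reconstruction from memory, not the actual file, which is long gone):

```html
<!-- A typical hand-coded homepage, circa late 1990s: a single file typed
     into Notepad.exe, no build tools, no frameworks. -->
<html>
<head>
  <title>Welcome to My Home Page</title>
</head>
<body bgcolor="#ffffff">
  <center>
    <font face="Comic Sans MS" size="5">Welcome to my corner of the Web!</font>
  </center>
  <p>Hand-written updates, posted whenever there was something worth sharing.</p>
  <!-- The visitor counter and "under construction" GIF went here. -->
</body>
</html>
```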
That barrier to entry shaped the kind of content we saw online. Usenet newsgroups, early blogs, and personal websites often reflected deep knowledge, specialized interest, or a desire to share insights with others. It wasn’t about monetization — it was about curiosity, contribution, and community.
Search engines like Google played an important role by surfacing this knowledge to a wider audience. In return, websites were rewarded with visibility and sometimes modest income through ads or affiliate links. The ecosystem, while imperfect, was relatively balanced.
The Social Media Shift
By the late 2000s and into the 2010s, platforms like Facebook, Twitter, and Instagram changed everything. The focus moved from information to engagement. Content creation became drastically easier, but also less intentional. Rather than building long-form resources or publishing on personal websites, users began posting short updates optimized for likes, shares, and algorithmic reach.
This lowered barrier was both a democratizing force and a double-edged sword. On the one hand, it allowed billions to participate in online conversations. On the other, it flooded the web with noise — repetition, outrage, misinformation, and distraction. Much of the thoughtful, evergreen content of the early web was buried under a sea of reactive, disposable media.
Still, the web was alive. It was human-driven. Content came from people, even if the quality varied widely.
Enter the Machine-Operated Web
Today, a new shift is underway. The rise of large language models (LLMs) and tools integrated into search engines — such as Google’s AI Overviews — is changing how people access information. Increasingly, users don’t visit websites at all. They simply ask a question and receive an AI-generated summary drawn from hundreds or thousands of sources behind the scenes.
At first glance, this seems efficient. But it has significant ripple effects.
- Website traffic declines because users get answers without clicking.
- Content creators receive no attribution, feedback, or compensation.
- New content becomes harder to sustain, especially if it’s time-intensive or costly to produce.
- The incentive to publish openly on the web diminishes.
If fewer people are writing, discussing, or sharing new knowledge online, we may eventually reach a point where the visible internet becomes a kind of stagnant archive — useful, but no longer evolving. At the same time, AI systems trained on existing content will keep generating new answers, even if the underlying knowledge base stops growing.
That’s the essence of what some refer to as the “Dead Internet” idea — not that the internet disappears, but that its vitality and originality are replaced by automation and passive consumption.
As a personal example, this blog you are reading right now has lost 75% of its visitors in the last couple of years:

Traffic to xaviesteve.com has dropped by around 75% over the last couple of years
Google’s Role in the Transition
Search engines — especially Google — are accelerating this shift. For many years, Google connected users to web pages. But as of 2024 and 2025, likely motivated by the pressure to maintain user engagement and shareholder value, its interface has started answering questions directly using AI, often summarizing from multiple sources without requiring a single click.
The benefits for users are clear: faster answers, no need to sift through multiple sites, and often a smoother experience. But the downside is structural: content producers lose visibility and traffic, even as their material is used to power those answers.
This change subtly transforms the relationship between the web and its most influential gatekeepers. Google and the other companies behind LLMs no longer simply index the web — they compete with it. And in doing so, they could unintentionally erode the very content ecosystem they depend on.
What’s At Stake Is Not Just Traffic, But Purpose
This isn’t just about SEO or ad revenue. It’s about the broader role of humans in the digital information landscape.
- Will people still write thoughtful blog posts if no one reads them?
- Will communities still answer questions in forums if the answers are quietly extracted and used elsewhere?
- Will small publishers maintain open websites if their content is never credited or visited?
When the primary interface to the internet is a machine that paraphrases from other machines, the open web becomes less relevant. And when the incentives to participate vanish, the open web risks fading altogether.
That’s not to say that AI should be avoided. It’s a powerful tool that I use on a daily basis, and when used responsibly, it can enhance understanding and access. But if it becomes the default endpoint for online knowledge — rather than a pointer to it — we risk turning the web from a living commons into a read-only utility.
Looking Ahead: How Do We Preserve Human Participation?
The challenge is not to stop progress, but to balance automation with sustainability. If AI and search engines become the dominant way people interact with information, then they must also help sustain the system that feeds them.
- Attribution and linking: AI-generated answers should include clear, clickable links to source content (see the sketch after this list).
- Compensation models: Search engines and AI companies could pay licensing fees for content they use — even pennies on the dollar would go a long way.
- Interface nudges: Tools could guide users to explore full articles or discussions rather than just accepting summaries.
- New formats: Websites may need to evolve toward more interactive, real-time, or community-driven experiences that AI can’t replicate easily.
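To make the attribution point concrete, here is a minimal sketch of what a source-crediting AI answer could look like in plain HTML. The markup and class names are hypothetical, invented for illustration; no real engine renders answers this way as far as I know:

```html
<!-- Hypothetical AI-generated answer that credits and links its sources.
     Element structure and class names are illustrative, not taken from
     any real product. -->
<div class="ai-answer">
  <p>Short generated summary of the user's question goes here.</p>
  <ul class="ai-answer-sources">
    <!-- Each source gets a visible, clickable, crawlable link -->
    <li><a href="https://example.com/original-article">Original article
        (example.com)</a> (primary source for the claim above)</li>
    <li><a href="https://example.org/forum-thread">Community discussion
        (example.org)</a> (supporting context)</li>
  </ul>
</div>
```

The point is not the markup itself, but the principle: sources stay visible, clickable, and crawlable instead of being silently absorbed into the answer.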
Above all, we need to ensure that humans remain involved — not just as consumers, but as creators and contributors. The open web only works if people still care enough to share their knowledge freely.
Conclusion: A Quiet Turning Point
The internet isn’t dead. But it may be changing into something quieter, less participatory, and more machine-managed than it once was.
Whether that’s a good or bad thing depends on how we respond. If we value the diversity, depth, and serendipity that came from people publishing for other people, we’ll need to find ways to protect and incentivize that activity — even in an AI-saturated future.
If we don’t, we may end up relying on machines to approximate what humans once shared directly — losing context, creativity, and community along the way.
