Bots Have Conquered the Internet — Literally
If someone had told you five years ago that nearly half of all “clicks” on the internet aren't made by humans, you'd probably have laughed. Today, in February 2026, this isn't a theory: it's documented reality.
According to Imperva (formerly Incapsula), which analyzed over 16.7 billion visits across 100,000 randomly selected domains, 49.6% of internet traffic in 2023 was automated, a rise of roughly two percentage points from 2022, attributed in part to AI models scraping the web for training data. Nor is this new: back in 2016 the figure was about 52%. In other words, bots didn't just arrive; they've always been here. What changed is how smart they've become.
Types of AI Bots: Who's Roaming the Web
Not all bots are created equal. There's a vast spectrum, from “good” bots that index your pages to malicious ones that steal your content:
“Good” Bots
- Search engine crawlers — Googlebot, Bingbot: scan pages to index search results
- Monitoring bots — check uptime, speed, and website availability
- Feed readers — RSS aggregators collecting fresh content
- SEO crawlers — Ahrefs, Screaming Frog: analyze site structure and links
Malicious Bots
- AI training scrapers — GPTBot (OpenAI), CCBot (Common Crawl), ClaudeBot (Anthropic): grab text to train LLMs
- Content scrapers — steal content and republish on doorway pages
- Spambots — flood forms, comments, and forums with spam
- Credential stuffing bots — test stolen credentials across thousands of sites
- DDoS bots — coordinated attacks that bring down servers
- Scalper bots — snatch tickets, sneakers, limited editions before real users
- Click fraud bots — generate fake clicks on advertisements
By industry estimates, 94.2% of websites have experienced a bot attack at least once. Most site owners don't even realize it happened.
Next-Generation AI Bots: The Real Threat
Until 2022, most bots worked on simple scripts: find, download, store. The emergence of ChatGPT and other LLMs changed everything fundamentally.
Timothy Shoup of the Copenhagen Institute for Futures Studies predicted in 2022 that in a scenario where GPT-3 “gets loose,” 99% to 99.9% of online content could be AI-generated by 2025-2030. In February 2026, we're seeing the first signs of that prediction coming true.
Dead Internet Theory: From Conspiracy to Reality?
In 2021, a user called “IlluminatiPirate” posted on the forum Agora Road's Macintosh Cafe a text titled "Dead Internet Theory: Most Of The Internet Is Fake". The core claim: the internet “died” around 2016 — most content isn't human-made.
At the time, it was dismissed as a conspiracy theory. Today?
What Experts Say in 2025-2026
- Sam Altman (OpenAI CEO, Sep. 2025): "I never took the dead internet theory that seriously but it seems like there are really a lot of LLM-run Twitter accounts now"
- Alexis Ohanian (Reddit co-founder, Oct. 2025): publicly warned about the “dead internet”
- Adam Aleksic (linguist, 2025): "It used to be a lunatic fringe conspiracy theory, but it's looking a lot more real"
- Popular Mechanics (Sep. 2025): “The Internet Will Be More Dead Than Alive Within 3 Years”
- Carlos Diaz Ruiz (2025): "What makes Dead Internet a nameworthy conspiracy is that it draws attention to a legitimate problem"
AI Slop: The Social Media Flood
The phenomenon now has a name: "AI slop" — low-quality automated content flooding social media platforms.
Facebook & “Shrimp Jesus”
In 2024, AI-generated images went viral on Facebook. The emblematic case: "Shrimp Jesus", images of Christ “merged” with shrimp, created entirely by AI. They drew hundreds of thousands of “Amen” comments, likely many of them from bots as well.
In January 2025, Meta announced plans for AI-powered autonomous accounts on Facebook and Instagram. Connor Hayes, VP of Product for Generative AI at Meta, stated: "We expect these AIs to actually exist on our platforms, kind of in the same way that accounts do... They'll have bios and profile pictures and be able to generate and share content." The accounts were quickly removed after backlash.
Reddit: The Battle for Data
Until recently, Reddit provided free API access, allowing AI companies to train their models on human conversations. In 2023, the company started charging — triggering a massive subreddit blackout protest.
Today, LLMs are increasingly used on Reddit by both users and bot accounts. Professor Toby Walsh (University of New South Wales) warned that training new AI on content created by older AI can lead to quality degradation — a phenomenon known as “model collapse.”
YouTube: “The Inversion”
On YouTube, fake views were so prevalent that engineers worried the detection algorithm would start treating fake views as default and real ones as anomalies. This situation was internally called "The Inversion" — a dark metaphor for the moment bots become the norm.
TikTok: Virtual Influencers
In 2024, TikTok offered advertising agencies the use of virtual AI influencers. Not just chatbots — complete “personas” that create content, interact with followers, and sell products. Without being real.
📖 Read more: AI Autonomous Cars: Waymo vs Tesla 2026
Google vs AI Crawlers: The Content Wars
Google admitted in March 2024 that its search results were being inundated by websites that "feel like they were created for search engines instead of people." A Google spokesperson acknowledged generative AI's role in the rapid proliferation of such content.
The Vicious Cycle
According to Bloomberg (May 2025):
"Entire AI-generated news networks have sprung up overnight. Meta envisions a future where AI is involved in the creation of a substantial share of the posts on Facebook and Instagram. Sites such as Wikipedia are straining under the weight of AI crawlers. All of this is creating a feedback loop, where AI-generated content is being created to please AI-powered recommendation systems, threatening to turn humans into bystanders."
The result is a self-reinforcing loop: AI generates content, AI-powered systems recommend it, and new AI models train on it.
Robots.txt: The Last Line of Defense?
The robots.txt file has been the web crawling “honor code” for decades. It defines which crawlers can access which content. But:
- It's not legally binding — no bot is obligated to respect it
- AI crawlers (GPTBot, CCBot, etc.) frequently ignore the rules
- Even if you block them, new bots appear with different user-agent strings
- Wikipedia “is straining under the weight of AI crawlers searching for fresh information” according to Bloomberg
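For publishers who do want to opt out of the crawlers named above, a robots.txt sketch might look like the following. The user-agent strings are the ones discussed in this article; the list is illustrative and point-in-time, since new crawlers appear constantly:

```
# Block known AI training crawlers (partial, point-in-time list)
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: anthropic-ai
Disallow: /

User-agent: Google-Extended
Disallow: /

# Everyone else (including search crawlers) keeps normal access
User-agent: *
Allow: /
```

As noted above, compliance is voluntary: this stops well-behaved crawlers only, and the list needs regular maintenance as user-agent strings change.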
How to Protect Yourself: Guide for Publishers & Users
If You Have a Website
- Update your robots.txt — Block GPTBot, CCBot, anthropic-ai, Google-Extended and other AI crawlers
- Implement rate limiting — Services like Cloudflare, DataDome, Akamai detect and block suspicious traffic
- Use CAPTCHA — While not foolproof, CAPTCHAs still filter out the most basic bots
- Monitor your logs — Check which user agents hit your pages and how frequently
- Consider AI-specific blocklists — Community projects such as ai.robots.txt maintain up-to-date lists of AI crawler user agents you can block
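To make the log-monitoring step concrete, here is a minimal Python sketch that tallies user-agent strings from a common/combined-format access log and flags known AI crawlers. The crawler list and log format are illustrative assumptions, not a specific server's configuration:

```python
import re
from collections import Counter

# Substrings identifying the AI crawlers named in this article.
# Illustrative only: new crawlers appear regularly, so keep this updated.
AI_CRAWLERS = ["GPTBot", "CCBot", "anthropic-ai", "ClaudeBot", "Google-Extended"]

# In the common/combined log formats, the user agent is the last quoted field.
UA_PATTERN = re.compile(r'"([^"]*)"\s*$')

def tally_user_agents(lines):
    """Count requests per user agent, and per matched AI crawler."""
    agents = Counter()
    ai_hits = Counter()
    for line in lines:
        match = UA_PATTERN.search(line)
        if not match:
            continue
        ua = match.group(1)
        agents[ua] += 1
        for crawler in AI_CRAWLERS:
            if crawler.lower() in ua.lower():
                ai_hits[crawler] += 1
    return agents, ai_hits
```

Pointing the function at an open log file handle processes it line by line, e.g. `tally_user_agents(open("/var/log/nginx/access.log"))` (a hypothetical path; adjust to your server). Sorting `ai_hits` by count shows at a glance which AI crawlers hit your site hardest.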
If You're a User
- Question your sources — If a “news” site appeared recently and has thousands of articles, it's likely AI-generated
- Be cautious on social media — Profiles without real history, with generic responses, may be bots
- Check the quality — AI content often “looks correct” but lacks depth, originality, personal experience
- Support independent sources — Pay for journalism, follow real creators
Legislation: What's Happening Globally
Legislation is running behind the technology, but progress is being made: the EU AI Act introduces transparency obligations for AI-generated content, and laws such as California's B.O.T. Act already require bots to disclose themselves in certain contexts.
What Will the Internet Look Like in 2028?
Based on current trends, some predictions:
- Verified Human Content — Certification of “human” content will become a premium service
- Web3 Identity — Blockchain-based identity to prove a real human exists behind an account
- Paid Web — The free, open web will shrink. Paywalls, subscriptions, premium communities.
- AI vs AI Arms Race — Bots detecting bots. AI detectors surpassed by new AI.
- "Small Web" Revival — Return to newsletters, small forums, RSS feeds — human-controlled spaces
- Strict legislation — The EU will likely require watermarking on every AI-generated output
What This All Means for You
The AI bot invasion of the internet isn't science fiction. It's happening now, on every platform, in every corner of the web. The evidence is clear: nearly half of all “clicks” aren't human, tens of millions of fake accounts post daily, and entire “media outlets” are fully automated.
The question is no longer “will it happen?” It's: how quickly will we respond? The answer depends on us — the real humans who still use this network.
