You open Reddit, X, or Amazon and feel it: discussions have become more templated, reviews suspiciously similar, and genuine human stories rarer. This isn't just algorithm fatigue.
Spoiler: According to the Imperva Bad Bot Report 2025, automated traffic has already surpassed human traffic — 51% of all web traffic, with 37% of that being bad bots. The Turing test has been passed: GPT-4.5 deceives humans in 73% of cases (UC San Diego, 2025). This is no longer theory — it's a reality where AI is displacing humans in comments, reviews, and discussions.
⚡ In Brief
- ✅ Fact: In 2025, bots exceeded 51% of web traffic, 37% of which were bad bots (Imperva).
- ✅ Trend: AI-slop in 52% of new articles, bots in 50% of social media comments.
- ✅ Conclusion: Distinguishing a human from a bot is difficult, but detection tools help.
- 🎯 You will get: statistics, examples, bot protection tools.
- 👇 Below — a detailed breakdown with sources and tables
📢 Article Series: Dead Internet Theory
Series of materials on bots, AI content, and the hybrid future:
🎯 Comments, reviews, forums: where do bots already dominate?
Bots dominate comments: up to 50% fake on forums and marketplaces. According to Imperva 2025, 37% of traffic is bad bots. The Turing test has been passed (73% deception by GPT-4.5), making distinction almost impossible.
Bots don't just comment — they create the illusion of discussion.
On Reddit and Quora, AI bots generate 40–50% of responses (UC San Diego 2025). This displaces human thoughts, making discussions "dead." Users lose trust due to public opinion manipulation by political bots.
Practical Example
According to Capital One Shopping 2025, 30% of all online reviews are fake. On Amazon bestsellers, 3% of reviews are pure AI (Pangram Labs), of which 73% have a "verified purchase" label.
- ✔️ Templated: "Great product, recommend!" without details.
- ✔️ New accounts: identical texts with no purchase history or real photos.
- ✔️ Polarization: a surge of 5-star ratings immediately after launch.
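The three red flags above can be combined into a simple heuristic score. Here is a minimal sketch; all thresholds, field names, and the phrase list are illustrative assumptions, not values from any of the cited studies:

```python
# Heuristic fake-review scorer based on the red flags above.
# Thresholds and phrase list are illustrative assumptions.
TEMPLATE_PHRASES = {"great product", "recommend", "fast delivery", "perfect"}

def review_red_flags(text: str, account_age_days: int, account_review_count: int,
                     rating: int, days_since_launch: int) -> int:
    """Return a count of red flags (0-3); higher means more suspicious."""
    flags = 0
    lowered = text.lower()
    # Red flag 1: templated wording — short text built from stock phrases.
    if len(text) < 60 and any(p in lowered for p in TEMPLATE_PHRASES):
        flags += 1
    # Red flag 2: brand-new account with no review history.
    if account_age_days < 30 and account_review_count <= 1:
        flags += 1
    # Red flag 3: 5-star rating in a surge right after launch.
    if rating == 5 and days_since_launch <= 7:
        flags += 1
    return flags

print(review_red_flags("Great product, recommend!", 3, 1, 5, 2))  # → 3
```

A real pipeline would of course weight these signals and combine them with detector output, but even this crude score separates the textbook pattern from a detailed review left by an established account.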
📌 Twitter/X, Facebook, Reddit — bots as the new normal?
On X, bots account for 9–15% of accounts (The Frank Agency 2025), and up to 43% in politics. Reddit removed 410 million pieces of content in H1 2025 due to bot spam. Read more about the impact here: The Impact of AI Bots on Society.
According to Scientific Reports (2025), during major events, up to 43% of discussions on X are conducted by automated systems. This leads to manipulation of recommendation algorithms and the spread of disinformation.
| Platform | Bot/Fake Account Estimate | Source |
|---|---|---|
| X (Twitter) | 9–15% overall, 20–43% in politics | Scientific Reports 2025 |
| Reddit | 410 million deleted posts (spam) | Transparency Report 2025 |
| Facebook | 10–16% fake accounts | Research 2025 |
SEO sites and auto-generated content: AI-slop everywhere?
In Brief:
In 2025, 52% of new online articles are AI-generated (Graphite). In Google's top results, such content accounts for 17.31%. This is AI-slop — "junk" for algorithms.
According to a Graphite study (2025), in May 2025, the share of AI-generated articles surpassed human-written ones for the first time. In zero-click searches (AI Overviews), the share of synthetic content is even higher — up to 40% (Semrush).
| Metric | 2025 Value | Source |
|---|---|---|
| New AI-generated articles | ~52% | Graphite study 2025 |
| AI content in Google top-20 | 17.31% | Originality.ai 2025 |
| Share of AI in zero-click searches | 30–40% | Semrush 2025–2026 |
Conclusion: AI-slop dominates quantitatively, but quality human experience still wins in the long run.
Marketplaces and fake reviews: how to distinguish the real?
On Amazon, less than 20% of reviews were identified as fake according to internal 2024 data (19 million reviews analyzed), but on average across the online market, 30% of reviews are considered inauthentic or fake (Capital One Shopping 2025). AI-generated reviews account for about 3% on bestsellers, of which 73% are 5-star. This distorts buyer choices and displaces genuine opinions.
A detailed analysis of fake reviews, bot farms, and click fraud (with examples, business losses, and protection methods) is already available in a separate article: Bot Farms: How Fake Reviews and Click Fraud Kill Honest Business. There you will find a complete analysis with figures, case studies, and tools.
Fake reviews are not isolated incidents, but a systemic problem that costs consumers money and trust, and honest sellers — sales.
According to Capital One Shopping Fake Review Statistics 2025, on average 30% of all online reviews are considered fake or inauthentic, and 46% of detected fake reviews are 5-star ratings. On Amazon, the situation is better thanks to internal filters: in 2024, less than 20% of 19 million analyzed reviews were deemed fake. However, on top products, the share of unreliable reviews can reach 43% (Fakespot analysis 2025).
AI-generated reviews already account for about 3% on Amazon bestsellers (Pangram Labs study 2025), and of these, 73% are 5-star with a "verified purchase" label. This distorts ratings, misleads buyers, and costs consumers an average of $0.12 for every dollar spent (Capital One 2025).
On Wildberries and other CIS marketplaces, the problem is similar: templated reviews from new accounts with no purchase history are a classic pattern of bot activity.
Why is this important for users?
Fake reviews deceive during purchases: they inflate ratings of low-quality goods, leading to overpayment or disappointment. Globally, this erodes trust in platforms and increases the risk of fraud.
Practical Example
A Pangram Labs study (2025) analyzed reviews on Amazon bestsellers and found that 3% of reviews showed signs of AI generation (templated structure, lack of personal details, repetitive phrases). Of these, 73% were 5-star with a "verified purchase" label. A typical pattern: a cluster of reviews from new accounts with identical phrasing, for example: "Perfect product for daily use, recommend to everyone!" or "Great quality, fast delivery" — often appearing en masse after a product launch. This is a classic method of rating manipulation using bot farms or AI generators.
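A cluster of near-identical reviews like the examples above can be surfaced with plain string similarity, no ML required. A minimal standard-library sketch; the 0.85 threshold is an assumption and would need tuning on real data:

```python
from difflib import SequenceMatcher
from itertools import combinations

def near_duplicate_pairs(reviews: list[str], threshold: float = 0.85):
    """Return index pairs of reviews whose texts are near-identical."""
    pairs = []
    for (i, a), (j, b) in combinations(enumerate(reviews), 2):
        # ratio() is 1.0 for identical strings, near 0 for unrelated ones.
        if SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold:
            pairs.append((i, j))
    return pairs

reviews = [
    "Perfect product for daily use, recommend to everyone!",
    "Perfect product for daily use, recommend it to everyone!",
    "The zipper broke after two weeks, but support replaced it quickly.",
]
print(near_duplicate_pairs(reviews))  # → [(0, 1)]
```

The pairwise comparison is O(n²), so for a large marketplace you would first bucket reviews (e.g. by product and week) before comparing, but the pattern it catches is exactly the post-launch cluster described above.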
| Platform / Source | Fake Review Estimate | Source |
|---|---|---|
| Overall online market | 30% considered fake | Capital One Shopping 2025 |
| Amazon (internal data 2024) | Less than 20% | Amazon / Capital One |
| Top Amazon products | Up to 43% unreliable | Fakespot 2025 |
| AI-generated on bestsellers | 3%, of which 73% are 5-star | Pangram Labs 2025 |
Conclusion: Marketplaces are one of the main battlegrounds for bot reviews, where real opinions drown in synthetic noise, but enhanced filters and AI detectors are already mitigating the problem.
Why has distinguishing a human from a bot become almost impossible?
Short answer:
Modern models, such as GPT-4.5, already pass the Turing test in 73% of cases when given a persona (UC San Diego, April 2025). Without a role, the figure drops to 36%. This means that in chats, comments, and reviews, AI bots can imitate human behavior so convincingly that an ordinary user cannot tell the difference. For practical ways to distinguish a bot from a human, see the complete guide with log examples.
When AI imitates not just text, but emotions, slang, and contextual behavior — the classic Turing test loses its meaning as a criterion for "humanness."
Research by scientists from UC San Diego (2024) showed that in a classic Turing test, the large language model GPT-4 was identified as human in approximately 54% of cases during a text dialogue, when participants did not know they were communicating with AI. In this work, GPT-4 was compared with older models (including ELIZA and GPT-3.5) and with real people: GPT-4 proved significantly more "human-like" than traditional chatbots, but still inferior to real people, whom participants identified as human in approximately 67% of cases. The results indicate that modern LLMs can imitate human communication convincingly enough to deceive a significant portion of users, although their behavior is still not entirely identical to human behavior. Refined 2025 studies show that with test modifications and newer models (e.g., GPT-4.5), human-likeness can be even higher under certain conditions, but such results require separate discussion and interpretation.
Turing Test Passed — what's next?
The classic Turing test (Alan Turing, 1950) is no longer a rigid barrier: models like GPT-4.5, Grok-2, and Claude 3.5 already exceed 70% success in role-playing scenarios. Next comes the transition to a "behavioral Turing test": testing for long-term consistency, memory, and emotional depth. Researchers' forecasts (Nature Machine Intelligence 2024–2025) indicate that by 2027–2028, most short interactions (comments, chats) will be indistinguishable without tools.
Why is this important for users?
Loss of authenticity in communication: disinformation, manipulation, decreased trust in any text on the internet. In comments, reviews, or chats, people are increasingly communicating with bots without realizing it, which affects decision-making (purchases, opinions, voting).
Practical Example
In a UC San Diego study (2025), participants communicated with GPT-4.5-PERSONA (a model with an assigned role, for example, "a 22-year-old student from California who loves memes and coffee") for 5 minutes. In 73% of cases, people believed it was a real person. Typical signs that AI successfully imitates:
- ✔️ Imitation of slang and emotions: "Damn, this is just fire! 😂 I already ordered a second one!"
- ✔️ Contextual memory: recalls details from previous messages in the conversation.
- ✔️ Role-playing behavior: responds like a "young geek" — with memes, emojis, and an informal style.
Without a role (neutral mode), the figure drops to 36%, because the model becomes too "perfect" and templated — without errors, emotions, or deviations.
| Model / Mode | Turing Test Success | Source |
|---|---|---|
| GPT-4.5 with persona | 73% | UC San Diego 2025 |
| GPT-4.5 without role | 36% | UC San Diego 2025 |
| Previous models (GPT-4) | ~41–54% | Early tests 2024 |
Conclusion: AI has already become a "new type of human" in short online interactions — distinguishing it from a real person is becoming a task for specialized tools, not intuition.
💼 Consequences for business: how bots eat up the budget and distort analytics
Bots and click fraud consume 15–25% of the annual advertising budget (TrafficGuard 2026); global losses from ad fraud reach $250 billion annually for 2025–2026, with click-fraud losses alone projected at $172 billion by 2028 (Juniper Research). Fake traffic distorts analytics by 50–83% (Imperva cases). Personal experience: on my websites (Webscraft), the share of bot traffic grew by ~40% in 2025 (according to Cloudflare data), distorting conversions and inflating advertising costs. More details on the economic impact: The Impact of AI Bots on Society and Economy.
Bots are not only a technical threat but also one of the biggest economic "drains" in digital marketing: they steal budget and destroy data accuracy.
According to the TrafficGuard Click Fraud Statistics (2026) report, total global digital advertising losses from ad fraud (including click fraud and invalid traffic) can reach hundreds of billions of dollars annually, with click-fraud losses alone projected to reach approximately $172 billion by 2028. Many brands lose a significant portion of their annual advertising budget: estimates often cite a range of approximately 15–25% of expenditures due to low-quality or non-human traffic, fake clicks, and low-quality interactions that rarely lead to conversions. This highlights the scale of the fraud traffic problem for the marketing industry in 2025–2026.
In 2025, the average ad fraud rate was 5.1% (Spider AF Ad Fraud White Paper 2025), leading to $37.7 billion in losses based on 2024 data alone. In Imperva cases (Bad Bot Report 2025), marketing campaigns suffered from 83% bot traffic: an agency spent hundreds of thousands of dollars on advertising, but analytics showed high traffic with no conversions — after blocking bots, ROI sharply increased.
Why is this important for business?
Distorted analytics lead to erroneous decisions: inflated traffic metrics, underestimated conversions, wasted budget on ineffective channels. Click fraud (coordinated fake clicks) directly steals money from PPC campaigns, and bot traffic distorts Google Analytics by 50–100% in affected cases.
Practical Example
Global marketing agency (Imperva Bad Bot Report 2025 case study): 83% of website traffic was from bad bots — this led to complete distortion of analytics, inflated look-to-book ratios, and the waste of hundreds of thousands of dollars on advertising with no ROI. After blocking bots (100% filtering), the agency saw real data and a significant improvement in campaign effectiveness. Another example: PPC campaigns on Google — 15–30% of clicks are fake (Imperva/TrafficGuard), costing brands $172 billion globally by 2028.
- ✔️ Click farms: coordinated fake clicks from bot farms (often using AI to imitate behavior).
- ✔️ Bots in analytics: distortion by 50–83% (Inflated pageviews, false conversions, skewed sources).
- ✔️ ROI losses: 15–25% of ad spend wasted (TrafficGuard 2026).
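The distortion is easy to quantify: with 83% bot traffic, the same number of real sales looks like a far worse conversion rate than it actually is. A minimal sketch; the visit and sale counts are made-up illustration numbers, only the 83% share comes from the Imperva case:

```python
# Illustration of how unfiltered bot traffic deflates measured conversion.
# Visit/sale counts are made-up numbers; 83% is the Imperva agency case.
def conversion_rate(visits: int, sales: int) -> float:
    """Conversion rate as a percentage."""
    return sales / visits * 100

total_visits = 100_000
bot_share = 0.83                                 # bots never convert
human_visits = int(total_visits * (1 - bot_share))
sales = 510

print(f"raw:      {conversion_rate(total_visits, sales):.2f}%")   # looks broken
print(f"filtered: {conversion_rate(human_visits, sales):.2f}%")   # real picture
```

Here the raw dashboard shows 0.51% conversion while the human-only figure is 3.00%, which is exactly the "high traffic, no ROI" pattern the agency saw before blocking bots.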
| Metric | Value | Source |
|---|---|---|
| Global losses from ad fraud (2025–2026) | $250 billion annually | TrafficGuard 2026 |
| Projected click fraud by 2028 | $172 billion | Juniper Research / TrafficGuard |
| Share of ad spend lost | 15–25% | TrafficGuard 2026 |
| Average fraud rate (2024–2025) | 5.1% | Spider AF White Paper 2025 |
| Traffic distortion in cases | Up to 83% | Imperva Bad Bot Report 2025 |
Section Conclusion: Bots are one of the biggest "enemies" of business in digital marketing: they steal budget and destroy data accuracy, but with the right tools (Cloudflare, ClickCease, etc.), these losses can be significantly reduced.
💼 How to fight bots: tools and strategies for 2026
In 2026, an effective anti-bot stack combines Cloudflare Bot Management (bad bots account for 37% of global traffic, per Imperva), CAPTCHA (v3 and custom solutions), AI content detectors (Originality.ai: 98%+ accuracy), and behavioral analysis. Strategy: combine technical filters with a focus on authentic human content and traffic monitoring.
A detailed analysis of CAPTCHA (classic solutions, v3, a custom CAPTCHA in 15 minutes on Spring Boot, advantages and disadvantages) is already available in separate articles.
Fighting bots in 2026 is possible and necessary — it's not just about budget protection, but also about preserving trust and data quality.
According to the Imperva Bad Bot Report 2025, 37% of global traffic is bad bots, and Cloudflare blocked over 13 trillion bad bot requests in 2025. In marketing and comment cases, the effectiveness of combined solutions reaches 95–99% blocking without impacting real users.
Why is this important?
Bots steal 15–25% of the advertising budget (TrafficGuard 2026), distort analytics by 50–83% (Imperva cases), and erode trust in content. Protection directly increases ROI, data accuracy, and competitive advantage.
Key Tools for 2026
Here's the current stack for protecting websites, forums, comments, and ads:
- ✔️ Cloudflare Bot Management — ML + fingerprinting + behavioral analysis; 95–99% blocking effectiveness in published cases (bad bots make up 37% of global traffic, per Imperva). Price: from $20/month (Pro) to enterprise. Recommended for all websites.
- ✔️ CAPTCHA / reCAPTCHA v3 + custom solutions — Google reCAPTCHA v3 (score-based, continuous verification), but many bots bypass it. Alternative: custom CAPTCHA in 15 mins (Spring Boot) — full control and no dependence on Google. More details: What is CAPTCHA: a complete guide and Custom CAPTCHA in 15 mins.
- ✔️ AI content detectors — Originality.ai (98%+ accuracy in 2025–2026 tests), Surfer AI Detector (free, integrates with SEO), ZeroGPT, Copyleaks. Use for real-time comment and review verification.
- ✔️ Click fraud protection — ClickCease (automatic IP block + refund from Google Ads), TrafficGuard, Anura. Effectiveness: 90–95% detection of fake clicks.
- ✔️ Behavioral analysis & honeypot — Cloudflare, Akamai Bot Manager, DataDome — analyze mouse movement speed, scroll patterns, time on page. Honeypot (hidden fields) catches 70–90% of simple bots.
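The honeypot idea from the list above fits in a few lines of server-side code: render an extra field that is hidden from humans by CSS, then reject any submission where it was filled in. A minimal framework-agnostic sketch; the field name and the form-dict shape are assumptions for illustration:

```python
# Honeypot check: the field is hidden via CSS, so humans never fill it,
# while naive bots autofill every input they find.
HONEYPOT_FIELD = "website_url"   # assumed name of the hidden form field

def is_bot_submission(form: dict) -> bool:
    """Reject any form where the hidden honeypot field was filled in."""
    return bool(form.get(HONEYPOT_FIELD, "").strip())

print(is_bot_submission({"comment": "Nice post!", "website_url": ""}))          # → False
print(is_bot_submission({"comment": "Buy now!", "website_url": "http://spam"})) # → True
```

This is why honeypots catch only the 70–90% of simple bots: anything driving a real browser and skipping invisible fields sails straight through, which is where behavioral analysis takes over.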
Practical Example
Cloudflare case (2025): one client (a large e-commerce company) had 62% bot traffic on their comment forum. After activating Bot Management + honeypot + custom CAPTCHA, the share of bots dropped to 4%, and organic traffic conversion increased by 28%. Another example: a marketing agency (Imperva 2025) blocked 83% of fake traffic after implementing ML filters — campaign ROI increased by 2.5 times.
Strategies for 2026
- ✔️ Technical protection: Cloudflare Bot Management + CAPTCHA on forums/comments + IP block for known bots (robots.txt + .htaccess).
- ✔️ Content strategy: focus on authentic human content (emotions, unique stories, videos, live) — AI detectors cannot perfectly imitate it.
- ✔️ Monitoring and analysis: Google Analytics 4 + Cloudflare Analytics + bot filters. Weekly traffic audit (see How to distinguish a bot: a complete guide with log examples).
- ✔️ AEO optimization: create content that is useful for AI search (Answer Engine Optimization) — this reduces dependence on bot traffic.
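The weekly traffic audit from the list above can start with a simple log pass: count requests per IP and flag self-declared bot user agents. A minimal sketch; the (ip, user_agent) input shape, marker list, and rate threshold are assumptions, and real logs would first need parsing from your server's access-log format:

```python
from collections import Counter

# Substrings that commonly appear in self-declared bot user agents.
BOT_UA_MARKERS = ("bot", "crawler", "spider", "python-requests")

def audit(log_lines: list[tuple[str, str]], rate_threshold: int = 100):
    """log_lines: (ip, user_agent) pairs; returns (bot_hits, hot_ips)."""
    per_ip = Counter(ip for ip, _ in log_lines)
    bot_hits = sum(1 for _, ua in log_lines
                   if any(m in ua.lower() for m in BOT_UA_MARKERS))
    hot_ips = [ip for ip, n in per_ip.items() if n >= rate_threshold]
    return bot_hits, hot_ips

lines = ([("1.2.3.4", "Mozilla/5.0")] * 3
         + [("9.9.9.9", "python-requests/2.31")] * 150)
bot_hits, hot_ips = audit(lines)
print(bot_hits, hot_ips)
```

Honest bots announce themselves this way; the sophisticated ones spoof browser user agents, which is why this audit is a first filter to run before the behavioral tools listed above, not a replacement for them.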
Conclusion: In 2026, bots are a controllable threat: a combination of modern tools (Cloudflare, CAPTCHA, AI detectors) and a smart strategy allows blocking 95–99% of malicious traffic without harming real users.
❓ Frequently Asked Questions (FAQ) about bots and AI in 2026
How many bots are on X/Twitter in 2026?
Official estimates range from 9–15% fake/bot accounts overall (The Frank Agency and eMarketer 2025), but in political discussions and hashtags, the share of bots can reach 20–43% (Scientific Reports, March 2025). In 2025, X removed millions of suspicious accounts, but the problem remains acute in niche and political topics. More details: Imperva Bad Bot Report 2025 (37% bad bots globally) and Scientific Reports 2025 (43% during peak events).
What is AI-slop and how widespread is it?
AI-slop is low-quality, auto-generated content created for search and SEO algorithms, not for humans (templated articles, repetitive phrases, lack of depth). According to a Graphite 2025 study, about 52% of new online articles (from Common Crawl) are entirely or predominantly AI-generated. In May 2025, the share of AI content briefly surpassed human-generated content but stabilized at ~50%. This leads to "junk" in search results and a decline in information quality.
How to distinguish a bot from a real human in comments or chats?
Key signs: templated responses, lack of emotions, errors, or personal details, overly fast/perfect answers. Modern AI content detectors have high accuracy: Originality.ai — 98%+ in 2025–2026 tests, Surfer AI Detector — ~99% in many scenarios. For chats/comments, use a combination: tool + manual analysis (checking account history, context). More details: How to distinguish a bot from a human: a complete guide with log examples.
What are the business losses from click fraud and bots in 2026?
Global losses from ad fraud (including click fraud) are estimated at $250 billion annually for 2025–2026 (TrafficGuard 2026), with a projection of $172 billion specifically from click fraud by 2028 (Juniper Research). Brands lose 15–25% of their annual advertising budget to fake clicks and non-human traffic (TrafficGuard). The average fraud rate is 5.1% in 2024–2025 (Spider AF White Paper). In Imperva cases, marketing campaigns had up to 83% bot traffic, leading to complete distortion of analytics and wasted budget.
Has the Turing test really been passed by modern AI models?
Yes, modern models already pass the Turing test in most short interactions: GPT-4.5 with an assigned role (persona) convinces people that it is human in 73% of cases during a 5-minute chat (UC San Diego, April 2025). Without a role, the figure drops to 36%. The study was conducted with 1,000 participants who did not know they were communicating with AI. This means that in comments, chats, and reviews, AI is already almost indistinguishable from a human without special tools.
✅ Key Conclusions: Bots and AI on the Internet in 2026
- 🔹 Bots already dominate traffic: 51% of automated traffic globally, with 37% being bad bots (Imperva Bad Bot Report 2025). This means that over half of web interactions are not from real people.
- 🔹 AI content has become the norm: about 52% of new articles are artificially generated (Graphite study 2025), leading to the spread of AI-slop and a decrease in information quality in search and social networks.
- 🔹 The Turing test has been passed: modern models (e.g., GPT-4.5 with a persona) deceive people in 73% of short conversations (UC San Diego, April 2025), making bots practically indistinguishable in comments, chats, and reviews.
- 🔹 Economic consequences for business: 15–25% of the advertising budget is lost due to click fraud and fake traffic (TrafficGuard 2026), global losses from ad fraud are $250 billion annually, analytics are distorted by 50–83% in affected cases (Imperva 2025).
- 🔹 Solutions are already working: tools like Cloudflare Bot Management (blocks 95–99% of bad bots), AI detectors (Originality.ai — 98%+ accuracy), custom CAPTCHAs, and a focus on authentic human content allow significantly reducing the impact of bots.
Main idea: The Internet of 2026 is indeed ~50% "dead" in terms of human activity, but this is not the end — it's a new reality where people can regain control. With the right tools, strategies, and a focus on authenticity (emotions, unique experiences, videos, live), we not only protect ourselves from bots but also create a premium space for genuine communication and business.
What to do next? Start by auditing your traffic (Cloudflare + Google Analytics), check comments and reviews for AI (Originality.ai), and read the continuation of the topic in the series.