Amid a wave of AI controversy and litigation, CNET has faced public criticism ever since it began quietly publishing thinly veiled AI-generated content on its site in late 2022. The scandal has now culminated in the site being demoted from “trusted” to “untrusted” on Wikipedia’s list of reliable sources [h/t Futurism].
The change was made after extensive discussion among Wikipedia’s editors, given that CNET, founded in 1994, had maintained a top-tier reputation on Wikipedia until late 2020. The debate attracted attention from many members of the media, including some CNET staffers.
Although Wikipedia is “the free encyclopedia that anyone can edit,” it’s important to remember that it is by no means the Wild West. Wikipedia’s community of editors and volunteers requires citations for information added to Wikipedia pages, maintaining a degree of accountability to the larger community responsible for running the site. And while Wikipedia should not be used as a primary source, its citation requirements tend to make it a good place to at least start researching a topic.
CNET’s (seeming) fall from grace on Wikipedia began before the discovery of AI-generated content. Back in October 2020, CNET started coming under scrutiny at Wikipedia following its acquisition by publisher Red Ventures, as there was evidence of declining editorial standards and an increase in advertiser-friendly content.
But after Red Ventures began publishing AI-generated content in November 2022 on what was once one of the most reputable tech sites on the web, Wikipedia’s editors almost immediately challenged CNET’s credibility and began pushing for the site to be delisted as a usable source entirely. CNET claims to have since stopped publishing AI-generated content, but Red Ventures’ ruthless pursuit of profit, and its record of posting false information on other sites it owns (such as Healthline), saw CNET removed from the current list of trusted sources anyway.
Chess, one of Wikipedia’s editors, is quoted in the Futurism article as saying: “The burden of proving that Red Ventures has ruined a site shouldn’t be repeatedly imposed on editors before we can start delisting; they can easily buy or start another site. I think we need to focus on the common denominator here, Red Ventures, and target the source of the problem (the spam network).”
It’s a scathing opinion, but perhaps not a surprising one. The problem here isn’t only that the use of generative AI was hidden in articles published on one of the most famous technology news sites of all time. It’s also that AI-generated articles tend to be poorly written and inaccurate.
Even before the age of AI, Wikipedia’s editors had to deal with automatically generated junk content from spambots and malicious actors. Seen in that light, the editors’ treatment of AI-generated content is remarkably consistent with past policy. I mean, it’s just spam, right?
On a related note, a self-proclaimed “SEO heist” was discovered on Twitter a few months ago. It might never have been discovered if the person responsible hadn’t combed through a competitor’s site, run everything through AI, and then immediately and openly boasted about the “results”: an entire AI-generated competing website of 1,800 articles targeting the same niche market, intended to “steal a total of 3.6 million traffic” from the competitor.
The site targeted by this so-called SEO heist is Exceljet, run by Excel expert David Bruns to help others get more out of Excel. On top of having the fruits of his labor stolen in perhaps the laziest and most despicable way possible, Bruns also found that much of the copied content was inaccurate. Fortunately, HubSpot’s coverage of the story also explains how Google eventually caught on to the issue.
Unfortunately, the rise of generative AI is also starting to take a toll on the usable internet, meaning content written by humans who can actually test things and truly understand them. We hope that articles like this one will help deter publishers from skipping quality control and auto-generating misleading content.
Especially in light of cases like The New York Times v. OpenAI and Microsoft, it seems that stealing the work of others is all but necessary for these so-called generative AIs to function. And at least when an ordinary thief steals something, the stolen goods still work; generative AI can’t even guarantee accurate results, especially if you don’t already have the expertise to tell the difference.