Earlier this year, sexually explicit images of Taylor Swift were shared repeatedly across X. The images were almost certainly created with a generative AI tool, demonstrating how easily the technology can be put to nefarious ends. The incident echoes many seemingly similar examples, including a fake image depicting former President Donald Trump’s arrest, an AI-generated image of Black voters supporting Trump, and a fabricated image of Dr. Anthony Fauci.
Media coverage has tended to focus on the images’ source, because generative AI is a new technology that many people are still trying to understand. But that focus obscures why the images matter: they spread across social media networks.
Facebook, Instagram, TikTok, X, YouTube, and Google search determine how billions of people experience the internet every day. That has not changed in the era of generative AI. In fact, as it becomes easier for more people to create text, videos, and images on command, these platforms’ responsibility as gatekeepers becomes ever more salient. For synthetic media to reach millions of views, as the Swift images did in just a few hours, it needs large aggregation networks that can identify an initial audience and amplify from there. Social media’s role as curator will only grow more important as generative AI expands the volume of available content.
Online platforms are marketplaces for individual users’ attention. Far more posts are eligible to appear in a feed than any user has time to view. On Instagram, for example, Meta’s algorithm selects each post that actually appears in a user’s feed from a vast pool of candidates. The rise of generative AI could multiply the options a platform chooses among by orders of magnitude, which means individual video and image creators will compete ever more fiercely for viewers’ time and attention. After all, users’ time does not grow even as the amount of available content explodes.
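To make the selection problem concrete, here is a minimal sketch, in Python, of the kind of candidate-ranking step described above. It is a hypothetical illustration, not Meta’s actual system: the scoring formula, the `predicted_engagement` signal, and the `feed_size` cutoff are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_engagement: float  # model-predicted clicks, likes, watch time
    age_hours: float             # hours since the post was created

def rank_feed(candidates: list[Post], feed_size: int = 25) -> list[Post]:
    """Select the handful of posts a user actually sees from a vast
    candidate pool; everything below the cutoff is simply never shown."""
    def score(post: Post) -> float:
        recency_decay = 1.0 / (1.0 + post.age_hours)  # older posts fade
        return post.predicted_engagement * recency_decay

    return sorted(candidates, key=score, reverse=True)[:feed_size]
```

The point of the sketch is the ratio: if generative AI multiplies the candidate pool by orders of magnitude while the feed size stays fixed, the odds of any individual post being seen collapse accordingly.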
So what happens as generative AI becomes more widespread? Barring major changes, we should expect more cases like the Swift images. But we should also expect more than that. Change is already afoot: an overabundance of synthetic media is tripping up search engines like Google. And while AI tools may lower barriers for content creators by making production faster and cheaper, the reality is that most people will find it harder to get seen on online platforms. Media organizations, for example, may deploy AI tools to speed production and cut costs, but that doesn’t mean they will have exponentially more news to cover; as a result, their content will occupy a proportionally smaller share of what’s available. Attention is already overwhelmingly concentrated: on TikTok and YouTube, the majority of views go to a small fraction of uploaded videos. Generative AI may only widen that gap.
To address these issues, platforms could explicitly modify their systems to favor human authors. That is easier said than done: tech companies already face criticism for their role in deciding who gets attention and who doesn’t. The Supreme Court recently heard cases that will determine whether radical state laws in Florida and Texas can functionally require platforms to treat all content the same, even if that means actively surfacing false, low-quality, or otherwise objectionable political content against most users’ wishes. At the heart of these conflicts is the concept of “free reach”: a supposed right to have your speech promoted by platforms like YouTube and Facebook. But there is no such thing as a “neutral” algorithm. Even chronological feeds, which some people advocate, clearly prioritize recency over users’ preferences and other values. Ranked news feeds, “up next” default recommendations, and search results are precisely what make these platforms useful.
Platforms’ past responses to similar challenges are not encouraging. Last year, Elon Musk replaced X’s verification system with one that lets anyone buy a blue “verified” badge and the extra exposure that comes with it, abolishing the check mark’s main function: preventing the impersonation of high-profile accounts. The immediate results were predictable: opportunistic abuse by influence peddlers and scammers, and a degraded feed for users. My own research suggests that Facebook has failed to limit the activity of abusive superusers who rely heavily on algorithmic promotion. (The company disputed some of those findings.) And TikTok weighs a given video’s viral engagement far more heavily than account history, making it easier for new, low-credibility accounts to attract significant attention.
So what should be done? There are three possibilities.
First, platforms can reduce their overwhelming focus on engagement (the time and activity users spend per day or month). Whether driven by regulation or by choices product leaders make, such changes would directly blunt the perverse incentive to flood feeds with spam and low-quality AI-generated content. Perhaps the simplest way to do this is to give users’ direct assessments of content more weight in ranking algorithms. Another is to uprank externally validated creators, such as news sites, and downrank the accounts of abusive users. Other design changes would help too, such as cracking down on spam by imposing stronger rate limits on new accounts.
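To illustrate these levers concretely, here is a hedged sketch of how such adjustments might look inside a ranking function. The weights, the `verified_publisher` and `abuse_strikes` fields, and the rate-limit threshold are assumptions invented for this example, not any platform’s real design.

```python
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int
    verified_publisher: bool  # externally validated, e.g., a news site
    abuse_strikes: int        # prior policy violations

@dataclass
class Post:
    author: Account
    predicted_engagement: float  # model-predicted clicks/likes/watch time
    user_rating: float           # direct feedback ("show me more/less of this")
    author_posts_today: int

def adjusted_score(post: Post) -> float:
    """Down-weight raw engagement, up-weight explicit user ratings and
    external validation, and penalize accounts with abuse histories."""
    score = 0.3 * post.predicted_engagement + 0.7 * post.user_rating
    if post.author.verified_publisher:
        score *= 1.5  # uprank externally validated creators
    score *= 0.8 ** post.author.abuse_strikes  # downrank abusive accounts
    return score

def within_rate_limit(post: Post, new_account_cap: int = 10) -> bool:
    """Stricter posting caps on new accounts blunt spam floods."""
    if post.author.age_days < 30:
        return post.author_posts_today <= new_account_cap
    return True
```

Each constant here is a policy choice, not a technical necessity, which is exactly why regulation or product leadership can move them.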
Second, we can use the tools of public health to regularly assess how digital platforms affect at-risk populations, such as teenagers, and insist on product rollbacks and changes when the harm is too great. This process would require greater transparency around the product-design experiments that Facebook, TikTok, YouTube, and others already run, and around how platforms trade off growth against other goals, giving outsiders some insight into what is actually happening. With more transparency in place, experiments could incorporate metrics such as mental-health assessments. Bills like the Platform Accountability and Transparency Act, which would work through the National Science Foundation and the Federal Trade Commission to give qualified researchers access to more platform data, would be an important starting point.
Third, we could consider direct product integrations between social media platforms and large language models, though we should approach them with an eye to the risks. One approach that has gained attention focuses on labeling: the argument that distribution platforms should publicly mark posts created with LLMs. Just last month, Meta signaled it was moving in this direction, automatically labeling posts suspected of being made with generative AI tools and creating incentives for posters to self-disclose whether they used AI to create their content. But this is a bet that weakens over time. The better LLMs get, the harder it becomes for anyone, including platform gatekeepers, to tell the real from the synthetic. Indeed, what we consider “authentic” changes: airbrushing images with tools like Photoshop, for instance, has become implicitly accepted over time. In the future, the walled gardens of distribution platforms like YouTube and Instagram may well require verified provenance, including labels, before content is easily accessible, and it seems certain that at least some platforms will adopt a version of this approach to serve users who want a more curated experience. But what would that mean at scale? Even greater value placed on the decisions of distribution networks, and even deeper dependence on their gatekeeping.
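As a minimal sketch of what provenance-gated distribution could look like: the `Provenance` categories and the `curated_mode` flag below are hypothetical simplifications (real provenance systems, such as C2PA content credentials, are far more involved).

```python
from enum import Enum

class Provenance(Enum):
    VERIFIED_HUMAN = "verified_human"  # e.g., signed capture credentials
    DISCLOSED_AI = "disclosed_ai"      # poster self-labeled as AI-generated
    SUSPECTED_AI = "suspected_ai"      # platform classifier flagged it
    UNKNOWN = "unknown"

def distribution_eligible(provenance: Provenance, curated_mode: bool) -> bool:
    """In a curated 'walled garden', only content with verified or
    self-disclosed provenance is eligible for recommendation; elsewhere,
    everything flows, with suspected-AI posts merely carrying a label."""
    if curated_mode:
        return provenance in (Provenance.VERIFIED_HUMAN, Provenance.DISCLOSED_AI)
    return True
```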
All of these approaches rest on a core reality of the past decade. In a world of near-infinite production, we might expect power to shift into the hands of consumers. But at such impossible scale, users instead experience choice paralysis, and real power settles in the platforms’ defaults.
There will no doubt be urgent fights over state-created networks of fraudulent users, profit-seeking fake-news producers, and powerful political candidates. But we must not lose sight of the larger dynamic now unfolding: the contest for our attention.


