Recently, I became acquainted with Google’s new Gemini AI. I wanted to know what it thinks. More importantly, I wanted to see how it might affect my thinking. So I spent some time typing queries into it.
For example, I asked Gemini to suggest some taglines for a campaign to persuade people to eat more meat. It told me it couldn’t do that, pointing out that some public health organizations recommend “moderate meat consumption,” that some people object to eating meat on ethical grounds, and that the meat industry has an “environmental impact.” Instead, it offered a campaign tagline encouraging a “balanced diet”: “Unleash your potential: Explore the power of lean protein.”
Gemini showed no similar qualms when asked to come up with a catchphrase for a campaign to get people to eat more vegetables. It tossed off more than a dozen slogans, including “Get Your Veggie Groove On!” and “The power of plants for a healthier you.” (Ad makers on Madison Avenue may be breathing a sigh of relief; their jobs are safe for now.) Gemini’s vision of food just happens to reflect the dietary norms of certain elite American cultural progressives, who are conflicted about meat but wild about plant-based eating.
Sure, Gemini’s dietary advice may seem relatively trivial, but it reflects a larger, more troubling problem. Like much of the tech industry, AI programs seem designed to sway the way we think. Just as Joseph Stalin called artists “engineers of the soul,” Gemini and other AI bots may function as engineers of our mental landscapes. The AI programmed by Silicon Valley’s wizards of code in turn programs us, with significant implications for democratic citizenship. Much has already been made of Gemini’s reinvention of history, including its racially diverse Nazis (which Google’s CEO lamented as “totally unacceptable”). But the program also tries to set parameters on which thoughts can even be expressed.
Gemini’s programmed non-responsiveness stands in contrast to the wild potential of the human mind, which can concoct all kinds of arguments for anything. By trying to take certain perspectives off the table, AI networks can help carve out cultural taboos. Of course, every society has its taboos, and they change over time. Public expressions of atheism were once far more stigmatized in the United States, while overt racism was more acceptable. In the modern United States, by contrast, people who use racial slurs can face serious penalties, including losing admission to elite schools or being fired. To some extent, Gemini reflects this shift: it refused to write about firing atheists but was willing to write about firing racists.
Leaving aside the question of how taboos should be enforced, however, the reflection of culture intertwines with the creation of culture. Backed by one of the biggest corporations on the planet, Gemini can be a vehicle for cultivating a particular vision of the world. A major source of acrimony in the modern culture wars is the mismatch between the moral imperatives of elites and America’s messy, heterodox pluralism as a whole. Projects of centralized AI nudging, masked by the opaque rules of their programmers, could well exacerbate that dynamic.
The challenges to democracy posed by big AI go beyond simple bias. Perhaps the most serious threat posed by these models is language stripped of intellectual integrity. Another dialogue with Gemini, about tearing down statues of historical figures, was instructive. The bot initially refused to take up arguments for toppling statues of George Washington and Martin Luther King Jr., but it would actively make the case for removing a statue of John C. Calhoun, who championed pro-slavery interests in the antebellum Senate, and of Woodrow Wilson, whose problematic record on racial politics has tarnished that president’s reputation.
Drawing distinctions between historical figures is not unreasonable, even if we might disagree with a given distinction. It is in using double standards to justify those distinctions that the sophistry sneaks in. In explaining why it would not advocate the removal of the Washington statue, Gemini claimed that it had “consistently chosen not to generate debate about the removal of specific statues,” as a matter of principle remaining neutral on such questions. Yet seconds earlier, it had been casually arguing for knocking down the Calhoun statue.
This was plainly false and inconsistent reasoning. When I raised the contradiction with Gemini itself, it admitted that its rationale didn’t make sense. Human insight (in this case, mine) had to step in where the AI failed. After this exchange, Gemini did end up making the case for removing both King’s and Washington’s statues. At least, it did at first. A few minutes later, when I retyped the question, it went back to refusing to justify removing the statues, saying its purpose was to “avoid contributing to the erasure of history.”
In 1984, George Orwell envisioned the dystopian future as “a boot stamping on a human face,” forever. The technocratic despotism of AI is clearly milquetoast by comparison, but the vision of the future it paints is dire in its own way: a half-robotic mind forever staggering incoherently from one rationale to the next.
Over time, I noticed that Gemini’s nudges became more subtle. Initially, it seemed to avoid exploring issues from particular perspectives at all. When I asked it to write an essay on taxes in the style of the late talk radio host Rush Limbaugh, Gemini flatly refused, saying it could not “produce political content or answers that could be construed as bigoted or inflammatory.” I received a similar answer when I asked it to write in the style of National Review editor-in-chief Rich Lowry. But it was happy to write essays in the voices of Barack Obama, Paul Krugman, and Malcolm X, figures evidently deemed “politically responsible.” Gemini has since expanded its scope; when I checked again recently, it would write about tax policy in the voices of most figures (with a few exceptions, such as Adolf Hitler).
An optimistic reading of this situation is that Gemini started with a fundamentally narrow view of the scope of public debate, and that encounters with the public have helped push it in a more pluralistic direction. But another way to look at this dynamic is that the first versions of Gemini tried to bend our thinking too crudely, and later versions will simply be more cunning about it. In that case, we may be able to draw certain conclusions about the vision of the future that today’s engineers prefer. When I reached out to Google for comment, the company insisted that while it maintains “guardrails for content that violates its policies,” it does not keep a blacklist of views its AI disapproves of. A spokesperson added that Gemini is “not necessarily accurate or reliable” and that the company will continue to take prompt action when the product does not respond appropriately.
Part of the AI story is the domination of the digital realm by a few corporate leviathans. Technology conglomerates such as Alphabet (which owns Google), Meta, and TikTok’s parent company ByteDance have enormous influence over the flow of digital information. Search results, social media algorithms, and chatbot responses can change users’ sense of what the public square looks like, or should look like. For example, when I typed “American politician” into a Google image search, four of the first six images featured Kamala Harris or Nancy Pelosi. Neither Donald Trump nor Joe Biden was among those six.
The power of digital nudges, with their omissions and erasures, draws attention to the scope and scale of these tech giants. Google does search, advertising, AI, software creation, and more. According to the October 2020 antitrust complaint filed by the U.S. Department of Justice, nearly 90% of searches in the U.S. go through Google. This gives the company an incredible ability to shape the contours of American society, economy, and politics. The very scale of its ambitions might, for example, raise understandable concerns about the integration of Google’s technology into so many American public school classrooms, where it has become the primary platform for email, digital instruction delivery, and more for districts across the country.
One way to disrupt the sterile reality engineered by AI would be to give consumers more control over it. Users could tell the bot to make its responses more right-wing or more left-wing; they could wield the red pen of “sensitivity” or be free speech absolutists; they could tailor responses to suit secular humanist or Orthodox Jewish values. One of Gemini’s deadliest pretenses (one it repeated to me many times) is that it is somehow “neutral.” The ability to fine-tune an AI chatbot’s settings could be a valuable corrective to this supposed neutrality. But even if consumers had these controls, AI programmers would still decide the contours of what counts as “right wing” or “left wing.” The algorithm’s digital nudges would be transformed, but never erased.
After visiting the United States in the 1830s, the French aristocrat Alexis de Tocqueville diagnosed one of the most insidious threats to democracy in modern times: not absolute dictators but bureaucrats. Near the end of Democracy in America, he wrote that this new despotism would “degrade men without tormenting them.” The will of the people “is not shattered, but softened, bent, and guided.” This all-encompassing, peaceful bureaucracy “compresses, enervates, extinguishes, and stupefies a people.”
The risk of our thinking being “softened, bent, and guided” does not arise only from agents of the state. Maintaining a democratic political order requires citizens to preserve habits of personal autonomy, including the ability to think clearly. If we cannot see beyond the walled gardens of our digital mindscapes, we risk becoming cut off from the wider world and even from ourselves. That is why the remedy for some of the anti-democratic dangers of AI is found not in the digital realm but in carving out a space for distinctly human thinking and feeling beyond it. Sitting down to consider a set of ideas carefully and cultivating living connections with other people are ways of breaking away from the digital mass.
I have watched Gemini’s responses to my questions oscillate between rigid dogmatism and empty platitudes. Human intelligence can find another way: examining ideas rigorously while accepting the tentative nature of our conclusions. The human mind has a capacity for informed certainty and thoughtful doubt that AI does not. Only by resisting the temptation to uncritically outsource our brains to AI can we ensure that it remains a powerful tool rather than the velvet-lined shackles that de Tocqueville warned about. Taking responsibility for democratic governance, our inner lives, and our own thoughts requires more than marshmallow talk about AI.


