During an earnings call in July, Meta CEO Mark Zuckerberg outlined his vision for the company's valuable advertising business, powered by artificial intelligence.
“In the next few years, AI will also be able to generate creative for advertisers and personalize it for people to see,” he said.
But as the trillion-dollar company seeks to revolutionize advertising tech, Meta's use of AI may already be causing the company some trouble.
On Thursday, a bipartisan group of lawmakers led by Republican Rep. Tim Walberg of Michigan and Democratic Rep. Kathy Castor of Florida sent a letter to Zuckerberg demanding that he answer questions about Meta's advertising service.
The letter follows a Wall Street Journal report in March that revealed federal prosecutors were investigating the company for its involvement in illegal drug sales on its platform.
“Meta appears to be shirking its social responsibilities and continuing to ignore its community guidelines,” the letter said. “Protecting online users, especially children and teens, is one of our top priorities. We remain concerned that Meta has failed to meet that mandate, and this dereliction of duty must be addressed.”
Zuckerberg has already faced senators grilling him about the safety of kids on Meta's social media platforms, and during one Senate hearing he stood up and apologized to families who feel their children have been harmed by social media use.
In July, the nonprofit watchdog Tech Transparency Project reported that Meta continues to profit from hundreds of ads promoting the sale of illegal and recreational drugs. Drugs including cocaine and opioids are prohibited under Meta's advertising policies.
“Many of the ads make no secret of their intent, showing pictures of prescription bottles, piles of pills or powder, or chunks of cocaine and urging users to place an order,” the watchdog wrote.
“Our systems are designed to proactively detect and police violating content, and we have rejected hundreds of thousands of ads that violate our drug policies,” a Meta spokesperson told Business Insider, reiterating a statement provided to The Wall Street Journal: “We will continue to devote resources to further policing this type of content. Our hearts go out to those who are suffering the tragic consequences of this epidemic. It will take all of us working together to stop it.”
The spokesperson declined to discuss how Meta uses AI to manage ads.
Ads poke holes in Meta’s AI system
The exact process by which Meta approves and moderates ads is not publicly available.
What we do know is that the company is using artificial intelligence in part to moderate content, The Wall Street Journal reported, adding that using photos to display drugs could allow some ads to slip through Meta’s moderation system.
Here’s what Meta revealed about its “ad review system.”
“Our ad review system relies primarily on automated technology to apply our ad standards to millions of ads that run on our Meta technology. However, we also use human reviewers to improve and train our automated systems and, in some cases, may manually review ads.”
The company also said it continues to work on further automating the review process to reduce reliance on humans.
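The description above (automated review first, with humans training the system and handling some cases manually) can be pictured as a triage pipeline. The sketch below is purely illustrative: the scoring rules, thresholds, and field names are assumptions for demonstration, not Meta's actual system.

```python
# Hypothetical sketch of a hybrid ad-review flow of the kind Meta describes:
# an automated classifier handles most ads, and low-confidence cases are
# escalated to human reviewers, whose decisions can retrain the model.
# All terms, tags, and thresholds here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Ad:
    ad_id: str
    text: str
    image_tags: list = field(default_factory=list)  # labels an image model might emit

BANNED_TERMS = {"cocaine", "opioids", "pills for sale"}      # illustrative only
RISKY_IMAGE_TAGS = {"pill_bottle", "powder"}                 # illustrative only

def automated_score(ad: Ad) -> float:
    """Toy risk score: counts banned signals in text and image labels.
    A real system would use trained text and vision models."""
    hits = sum(term in ad.text.lower() for term in BANNED_TERMS)
    hits += sum(tag in RISKY_IMAGE_TAGS for tag in ad.image_tags)
    return min(1.0, hits / 2)

def review(ad: Ad, reject_above: float = 0.8, escalate_above: float = 0.3) -> str:
    """Automated-first triage: reject clear violations, escalate uncertain
    cases to humans (who also generate new training labels), approve the rest."""
    score = automated_score(ad)
    if score >= reject_above:
        return "rejected"
    if score >= escalate_above:
        return "human_review"
    return "approved"

# An ad that hides the drug in an image rather than text scores lower --
# one way policy-violating ads can slip past automated systems.
print(review(Ad("a1", "DM to order", ["pill_bottle"])))        # human_review
print(review(Ad("a2", "cocaine pills for sale", ["powder"])))  # rejected
```

The gap the Journal describes shows up in the last two lines: signals carried only in images are harder for an automated text-centric filter to catch, which is why coded captions plus drug photos can land in the "approved" or "escalate" bucket instead of being rejected outright.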
But the revelation of drug ads on the platform shows that, despite Zuckerberg's promises of improved targeting and his portrayal of a sophisticated ad service that creates content for advertisers with generative AI, content that violates policies can still slip through the automated systems.
Difficulties in deploying Meta’s AI
Meta has experienced difficulties rolling out its AI-powered services outside of advertising technology.
Less than a year after Meta introduced its celebrity AI assistants, the company discontinued the products to focus on enabling users to create their own AI bots.
Meta is also continuing to iron out glitches with its chatbot and AI assistant, Meta AI, which has been known to give hallucinatory answers and, in the case of BI’s Rob Price, to act like a user and give out his phone number to others.
Meta isn't alone: the technical and ethical issues prevalent in AI products are a concern for many major U.S. companies.
A survey by Arize AI, a firm that researches AI technology, revealed that 56% of Fortune 500 companies consider AI a “risk factor,” the Financial Times reported.
Broken down by industry, 86% of technology companies, including Salesforce, said they believe AI poses a business risk, according to the report.
But these concerns come at a time when tech companies are pushing AI into every corner of their products, even as the path to monetization remains unclear.
“The development and deployment of AI involves significant risks,” Meta said in its 2023 annual report. “There can be no assurance that the use of AI will improve our products or services or be beneficial to our business, including its efficiency or profitability.”