There are many tools to filter and block spam calls. But what about the calls you actually answer?
This is the problem Microsoft is trying to solve with a new service called Azure Operator Call Protection. The service can analyze conversations in real time and alert users if the caller seems suspicious. Microsoft is piloting the program with BT Group and is demonstrating how the technology works at the Mobile World Congress conference in Barcelona.
The announcement comes as spam calls remain a persistent problem. In a study that analyzed 98 billion calls worldwide, voice security platform Hiya reported that the average phone user receives about 14 spam calls each month. The Federal Communications Commission recently cracked down on robocalls, deeming fraudulent calls using AI-generated voices illegal.

Azure Operator Call Protection is a Microsoft service that mobile carriers can offer to their subscribers as an option. It uses AI to monitor conversations for signals that a call may be fraudulent. Those signals could include language encouraging recipients to provide sensitive information over the phone, said Shawn Hakl, vice president of 5G strategy for Microsoft’s Azure for Operators program.
“The good news is that this also just reinforces best practices that people often overlook,” Hakl said.
Hiya said the most common scams include calls impersonating Amazon, insurance companies and credit card companies, as well as attempts to trick users into handing over Medicare information. Hakl also said the AI models will evolve over time as new types of threats emerge.

Infographic showing how Microsoft’s Azure Operator Call Protection service works.
The current version of the tool interrupts the call and alerts the user if the call appears to be a potential scam. From there, the user can choose to end the call or ask for more information about why it was flagged. In other words, if the system determines that a call may be fraudulent, it doesn’t simply end the call. The user makes that decision.
“We will provide as much information as we can extract and allow the user to make a choice,” Hakl said.
The service is opt-in and requires user consent, and no call data is stored or used to train Microsoft’s AI models.
“Once the call ends and the customer has chosen to hear the recommendation or make additional inquiries, that call data goes away,” Hakl said.
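To make the general idea concrete, here is a minimal sketch of an opt-in "flag the call, then let the user decide" flow. It is not Microsoft's actual implementation; every function name and flagged phrase below is an assumption made purely for illustration.

```python
# Hypothetical sketch only: an opt-in "flag, then let the user decide" flow.
# This is NOT Azure Operator Call Protection code; all names and phrases
# below are assumptions for illustration.

SUSPICIOUS_PHRASES = [
    "verify your social security number",
    "read me the code we just sent you",
    "provide your medicare number",
    "pay with gift cards",
]

def analyze_snippet(transcript_snippet: str) -> list[str]:
    """Return any suspicious phrases found in a snippet of the live-call transcript."""
    lowered = transcript_snippet.lower()
    return [phrase for phrase in SUSPICIOUS_PHRASES if phrase in lowered]

def handle_live_call(transcript_snippet: str, user_opted_in: bool) -> str:
    """Alert an opted-in user when a snippet looks fraudulent; the user decides what to do."""
    if not user_opted_in:
        return "not monitored"  # the service only runs with the subscriber's consent
    reasons = analyze_snippet(transcript_snippet)
    if not reasons:
        return "call continues"
    # Interrupt with an alert and explain why the call was flagged,
    # but leave the choice of ending the call to the user.
    print("Warning: this call may be a scam. Flagged phrases:", reasons)
    choice = input("End the call? (y/n) ")
    return "call ended by user" if choice.strip().lower() == "y" else "call continues"

if __name__ == "__main__":
    print(handle_live_call(
        "To fix your account, please verify your Social Security number now.",
        user_opted_in=True,
    ))
```

In this sketch, as in the service described above, the analysis only warns and explains; ending the call remains the user's choice.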
Microsoft is working with BT Group to test the technology, but has not said when it will be brought to market. Microsoft’s efforts are part of a broader movement to combat phone fraud. For example, AT&T recently began adding company logos to legitimate corporate calls to help users spot spam. In November, the White House also announced a virtual hackathon focused on building AI technology to spot unwanted spam and robocalls.
In the future, this feature may extend beyond voice calls. “We evaluate audio first,” Hakl said. “But we’re obviously interested in the textual side as well.”
Editor’s note: CNET uses an AI engine to create some stories. See this post for more information.