Apple, Microsoft and Google are ushering in a new era of what they describe as artificial intelligence-enabled smartphones and computers that will automate tasks like editing photos or wishing a friend a happy birthday.
But to make that happen, these companies need something from you: more data.
In this new paradigm, your Windows computer will take screenshots of everything you do every few seconds, your iPhone will stitch together information from the many apps you use, and your Android phone will listen in on your phone calls in real time to alert you to fraud.
Is that information you’re willing to share?
This shift has significant implications for our privacy. To deliver these new bespoke services, the companies and their devices need more persistent, more intimate access to our data than ever before. Until now, the way we have used apps and viewed files and photos on our phones and computers has been relatively siloed. AI needs a holistic view to connect the dots of what we do across apps, websites and communications, security experts say.
“Do I feel like it’s safe to give this company my information?” Cliff Steinhauer, a director at the National Cybersecurity Alliance, a nonprofit that focuses on cybersecurity, said of the companies’ AI strategies.
All of this is happening because OpenAI’s ChatGPT upended the tech industry nearly two years ago. Companies like Apple, Google and Microsoft have since overhauled their product strategies, investing billions of dollars in new services under the umbrella term AI, convinced that this new type of computing interface, one that constantly learns from your behavior to offer assistance, will become indispensable.
The biggest potential security risk, experts say, stems from a subtle shift in how the new devices work: AI can automate complex tasks, like erasing unwanted objects from a photo, that can require more computing power than a phone can handle. That means more of your personal data may have to leave your phone to be processed elsewhere.
That information is sent to the so-called cloud, a network of servers that process the requests. Once information reaches the cloud, it could potentially be seen by others, including company employees, malicious actors and government agencies. And while some of our data has always been stored in the cloud, our most personal and intimate data, the photos, messages and emails that were once for our eyes only, may now be connected and analyzed by a company on its servers.
Technology companies say they have gone to great lengths to protect people’s data.
For now, it’s important to understand what happens to our information when we use AI tools, so I asked each company how it handles our data and interviewed security experts. I plan to wait and see whether the technology works well enough before deciding whether sharing my data is worth it.
Here’s what you need to know:
Apple Intelligence
Apple recently announced Apple Intelligence, a suite of AI services and its first major entry into the AI race.
The new AI services will be built into Apple’s fastest iPhones, iPads and Macs starting this fall. People will be able to use them to automatically remove unwanted objects from photos, summarize web articles and write replies to text messages and emails. Apple is also overhauling its voice assistant, Siri, to make it more conversational and give it access to data across apps.
At an Apple conference this month, Craig Federighi, the company’s senior vice president of software engineering, demonstrated how Apple Intelligence could work: Mr. Federighi received an email from a colleague asking to push a meeting back, but that night he was planning to see a play his daughter was starring in. His phone then pulled up his calendar, a document containing details about the play and a maps app that predicted whether he would be late to the play if he agreed to a meeting at a later time.
Apple said it strives to process most AI data directly on its phones and computers, which would prevent anyone, including Apple, from having access to the information. But for tasks that have to be pushed to its servers, Apple said it has safeguards in place, such as scrambling the data through encryption and promptly deleting it.
Apple said it had also taken steps to prevent its employees from having access to the data, and added that it would allow security researchers to audit the technology to make sure it worked as promised.
But Apple hasn’t made clear which new Siri requests could be sent to its servers, said Matthew Green, a security researcher and an associate professor of computer science at Johns Hopkins University who was briefed by Apple on its new technology. Anything that leaves your device is inherently less secure, he said.
Microsoft’s AI laptop
Microsoft is bringing AI to the old-fashioned laptop.
Last week, the company began selling a new kind of Windows computer, the Copilot+ PC, with prices starting at $1,000. The computers are equipped with a new type of chip and other hardware that Microsoft says will keep users’ data private and secure. The PCs also have new AI-powered features, such as generating images and rewriting documents.
The company also unveiled a new system called “Recall” that will help users quickly find documents and files they’ve worked on, emails they’ve read and websites they’ve visited. Microsoft likens Recall to having a photographic memory built into your PC.
To use it, you type in a casual phrase, like, “I remember I had a video call with Joe recently and he had an ‘I Love New York’ coffee mug,” and your computer will pull up a recording of the video call with those details.
To achieve this, Recall takes screenshots of what you’re doing on your machine every five seconds and compiles those images into a searchable database. The snapshots are stored and analyzed directly on your PC, so the data isn’t reviewed by Microsoft or used to improve the AI, the company said.
Still, security researchers have warned of a potential risk: if the data were ever hacked, it could expose everything a user has typed or viewed. That concern led Microsoft, which had planned to roll out Recall last week, to postpone its release indefinitely.
The computers ship with Windows 11, Microsoft’s newest operating system, which David Weston, a company executive who oversees security, said has multiple layers of security.
Google AI
Google also announced a series of AI services last month.
One of the biggest announcements was a new AI-powered phone fraud detection feature that listens to phone calls in real time and notifies you if it thinks a caller might be a scammer (for example, if they ask for your bank PIN). Google says the fraud detection feature operates entirely on the phone and must be initiated by the user, which means Google won’t be listening in on your calls.
Google also announced another feature, “Ask Photos,” that does require sending information to the company’s servers. Users can ask questions like “When did my daughter learn to swim?” and be shown the first photos of their child swimming.
Google said that in rare cases, employees might review Ask Photos conversations and photo data to address abuse or harm, and that the information might also be used to help improve the Photos app. In other words, your question and the photo of your child swimming could help other parents find images of their own children swimming.
Google said its cloud is locked down with encryption, security protocols and other techniques that limit employee access to data.
“Our privacy approach applies to our AI capabilities whether they’re running on-device or in the cloud,” Suzanne Frey, a Google executive who oversees trust and privacy, said in a statement.
But Mr. Green, the security researcher, said he found Google’s approach to AI privacy relatively opaque.
“I don’t like the idea of my very personal photos and searches being sent to a cloud over which I have no control,” he said.