- The AI boom is revitalizing edge computing, which moves data processing away from the cloud.
- The boom in edge computing has the potential to reduce the cost and environmental impact of deploying AI.
- This article is part of our series, “5G and the Connectivity Playbook,” which explores some of the most significant technological innovations of our time.
Artificial intelligence is leading us into the era of edge computing, two words you’ll likely hear more of in the coming months and years.
Tech giants have spent billions of dollars on the cloud, getting customers to move their data to remote servers, and now they’re expanding into edge computing, which refers to moving more computing closer to users—to the “edge” of the network.
With edge computing, more processing happens on or near the device — your smartphone, your self-driving car, your home security device — a change that could reduce latency, lower energy costs, and improve privacy and security by keeping more sensitive information off distant servers.
The concept of edge computing isn’t new, but between the AI gold rush and improvements in 5G, the time is right for the field to boom. 5G, which lets devices at the edge communicate with one another and with the cloud, has had a slow and choppy start, but recent advances could be the catalyst for edge computing, which depends on seamless data connections.
Amazon has been eyeing edge computing as a $1 billion business, Business Insider previously reported, and talk of AI and its potential for edge computing was also a hot topic at last month’s Mobile World Congress in Barcelona.
The accelerated move to edge computing is driven in part by the boom in AI, which requires huge amounts of data to process, Jim Poole, vice president of global business development at Equinix, said during a panel at MWC.
“Data gravity is a real thing,” Poole said. “At some point it becomes economically and physically impossible to send that data back somewhere else.”
The latency benefits of the edge are crucial for technologies like self-driving cars, which need to make split-second decisions and therefore carry powerful computers in the vehicle itself, as well as for medical equipment and devices used in hazardous manufacturing, where computing needs to happen instantly.
AI accelerates edge computing
The industry is already seeing some edge benefits in smartphones and is developing better chips and software to put more AI processing power on devices — a shift that is also pushing AI companies to release smaller language models that can run on less powerful hardware.
Taiwanese chipmaker MediaTek showed off a smartphone-like device in one of the most impressive demos at this year’s MWC: a generative image maker equipped with a Stable Diffusion AI model that creates and edits photos in real time.
Lenovo, which was also at the show, is making a big enterprise push by selling “edge AI” servers to businesses and has plans to move into the consumer edge space, Tom Butler, executive director of Lenovo’s laptop product line, told BI.
“When you think about bringing generative workloads to the device, first of all, you solve the time, security and privacy issues, because you’re not pushing it up to the cloud and back,” he said.
Edge computing can save energy
Shifting AI closer to users could provide cost benefits for tech companies like OpenAI, Google, and Amazon, which run AI models on their own servers at great expense.
Edge computing can also have environmental benefits: The data centers that run AI in the cloud use vast amounts of water and energy.
Jillian Kaplan, Dell’s global head of 5G, said during an MWC panel that edge computing will result in “huge energy savings.”
“I think the theme of sustainability has come and gone over the years,” Kaplan said.
“I don’t see it fluctuating again,” she added. “Where we are now, energy efficiency has to remain a top priority, and these edge and AI capabilities are going to help us keep our equipment extremely energy efficient as we have so much data coming in.”