Five years on, MNOs have barely deployed edge computing. Only a handful of operators have even attempted partnerships with search engine companies, and those arrangements send data to the partners' cloud data centers rather than using the operators' own computing resources.
Instead, the mainstream model remains one where telecommunications networks carry data to peering points (often internet exchange points), where multiple internet service providers and networks interconnect to exchange traffic. From there, the data travels to hyperscaler-owned data centers for processing, and responses return along the same path.
Why hasn't the concept of edge computing become popular?
Edge computing promises to reduce latency by shortening the distance data must travel for processing. However, light in fiber optic cable covers roughly 200 km per millisecond, so a data center 100 km away adds only about 1 millisecond of round-trip propagation delay.
Current 5G network latency is typically 30-40 milliseconds, with the best private networks around 10 milliseconds. Data processing itself usually takes several milliseconds, especially when video compression is involved. Therefore, reducing response time by 1 millisecond by bringing computation closer to the device makes almost no sense.
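The back-of-the-envelope arithmetic above can be sketched as a simple latency budget. The figures below come from the text (200 km/ms fiber propagation, ~30 ms 5G air latency); the 5 ms processing time is an illustrative assumption within the "several milliseconds" range mentioned.

```python
def propagation_delay_ms(distance_km: float, km_per_ms: float = 200.0) -> float:
    """Round-trip fiber propagation delay to a data center distance_km away."""
    return 2 * distance_km / km_per_ms

def total_response_ms(distance_km: float,
                      air_latency_ms: float = 30.0,   # typical 5G air latency
                      processing_ms: float = 5.0) -> float:  # assumed value
    """Total response time: air interface + processing + fiber round trip."""
    return air_latency_ms + processing_ms + propagation_delay_ms(distance_km)

# Moving compute from a metro hyperscale site 100 km away to an edge site 1 km away:
far = total_response_ms(100)   # 30 + 5 + 1.00 = 36.00 ms
near = total_response_ms(1)    # 30 + 5 + 0.01 = 35.01 ms
print(f"100 km: {far:.2f} ms, 1 km: {near:.2f} ms, saving: {far - near:.2f} ms")
```

The saving from moving compute 99 km closer is under 1 ms, dwarfed by the air-interface and processing terms, which is exactly the article's point.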
Furthermore, in most regions a large city already has hyperscale data centers within 1 millisecond's reach. These large facilities offer better economies of scale, and hyperscalers are better positioned than MNOs to sell computing services. As a result, today's "edge" solutions largely follow the traditional model: MNOs route traffic to peering points, and processing happens in hyperscalers' data centers.
One exception is the private 5G network, which routes traffic to the IT systems of the network's owner. While this technically meets the definition of edge computing, functionally it is simply another form of peering, this time into the owner's local IT network.
Recently, discussion of MEC has subsided as MNOs have realized that edge computing is neither a service they can viably deploy nor a compelling revenue opportunity. In fact, MNOs are moving in the opposite direction, centralizing their computing: consolidating baseband processing from multiple base stations into centralized units rather than pushing computing resources out to the network edge.
The future of edge computing: What will 6G change?
Currently, what 6G will look like remains uncertain. MNOs advocate for "software-only" updates to reduce operating costs, while manufacturers are promoting "beyond 5G" with faster speeds and lower latency. Concepts such as sensing and AI-native capabilities are also under discussion, but whether new spectrum will be allocated remains unclear.
For edge computing to develop, several conditions need to be met:
New applications that require latency below 5 milliseconds
Customers willing to pay a premium for that ultra-low-latency service
Additional spectrum allocated to support a low-latency air interface
6G deployments widespread enough to make edge computing feasible across entire regions
At present, all of these seem unlikely to be achieved.
Most proposed 6G applications are merely reiterations of 5G promises, many of which have yet to be fulfilled. Consumers and businesses have shown little interest in paying extra for 5G services, and securing additional spectrum for 6G is becoming increasingly difficult. In fact, mid-band 5G (3.5 GHz) covers only about 20% of the land area in most countries, suggesting 6G coverage will be even more limited.
Will artificial intelligence and sensing technology be game-changers?
Two frequently discussed new applications for 6G are sensing and artificial intelligence. The market demand for sensing applications remains unclear, and implementing sensing in 6G may require high-frequency spectrum that is not well-suited for communication.
Artificial intelligence applications typically require fast response times, which would seem to justify edge computing. However, most AI workloads either run directly on mobile devices (e.g., AI assistants, visual processing) or require high-performance processing in large data centers. Few AI applications appear to need millisecond-level response times, and there is no strong market demand to pay extra for a 1-millisecond speedup through edge computing.
Some industry leaders have suggested that MNOs sell their "idle" computing resources for AI workloads, utilizing unused capacity in their network baseband processing, but this idea is flawed. Hyperscale data centers also have excess capacity during off-peak hours, rendering MNO computing resources unnecessary. Furthermore, the complexity of dynamically reallocating computing power between AI workloads and radio network functions is significant.
Therefore, transferring AI workloads to MNOs offers little added value and is unlikely to be successful.
The edge still belongs to the cloud
The telecommunications industry has long attempted to compete with hyperscale providers by offering value-added services, but has often failed. MNOs have been striving to gain a foothold in areas dominated by hyperscale and OTT providers, and edge computing is no exception.
Recent years have shown that the true “edge” will continue to live in hyperscale data centers in major urban hubs, not at the network edge.