The edge of a network, as you may know, is the furthest extent of its reach. A cloud platform is a kind of network overlay that makes multiple network locations part of a single network domain. It therefore stands to reason that an edge cloud is a single addressable, logical network at the furthest extent of a physical network, and that an edge cloud on a global scale is a way to make multiple, remote data centers accessible as a single pool of resources -- of processors, storage, and bandwidth. The combination of 5G and edge computing promises to unlock new capabilities, from real-time analytics to automation to self-driving cars and trucks.
At the edge of any network, there are opportunities for positioning servers, processors, and data storage arrays as close as possible to those who can make best use of them. Because the propagation speed of signals is essentially fixed, wherever you can reduce the distance, you minimize latency. A network designed for use at the edge leverages this minimal distance to expedite service and generate value. Depending on the application and on which edge strategy is employed, these servers may actually end up on one end of the network or the other. Because the Internet isn't built like the old telephone network, "closer" in terms of routing expediency is not necessarily closer in geographical distance. And depending on how many different types of service providers your organization has contracted with -- public cloud applications providers (SaaS), apps platform providers (PaaS), leased infrastructure providers (IaaS), content delivery networks -- there may be multiple tracts of IT real estate vying to be "the edge" at any one time.
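The physics behind this argument can be sketched as a back-of-the-envelope calculation. The figures below assume signals in optical fiber travel at roughly two-thirds the speed of light in a vacuum (a commonly cited approximation); real-world latency adds routing, queuing, and serialization delays on top of this physical floor.

```python
# Rough lower bound on network latency set by propagation distance.
# Assumption: signal speed in fiber ~ 2/3 the vacuum speed of light.
SPEED_OF_LIGHT_KM_S = 299_792.458
FIBER_SPEED_KM_S = SPEED_OF_LIGHT_KM_S * 2 / 3  # ~200,000 km/s

def min_rtt_ms(distance_km: float) -> float:
    """Best-case round-trip time over fiber, in milliseconds.
    Actual latency is always higher: this is only the physical floor."""
    one_way_ms = distance_km / FIBER_SPEED_KM_S * 1000.0
    return one_way_ms * 2

# A user 5,000 km from a data center cannot see better than ~50 ms RTT;
# an edge node 50 km away cuts that floor to roughly half a millisecond.
for km in (50, 500, 5000):
    print(f"{km:>5} km -> {min_rtt_ms(km):6.2f} ms minimum round trip")
```

No amount of server or software optimization can beat this floor, which is why moving the resource closer is the only remaining lever.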
At present, edge computing is more of a prospect than a mature market -- more of a concept than a product. It is an effort to bring quality of service (QoS) back into the data center services discussion, as enterprises decide not just who will provide their services, but also where. "The edge" is a theoretical space where a data center resource may be accessed in the minimum amount of time. You might think the obvious place for the edge to be located, for any given organization, is within its own data center ("on-premises"). Or, if you've followed the history of personal computing from the beginning, you might think it should be on your desktop, or wherever you've parked your PC.
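That definition -- the edge as whichever candidate location is reachable in the minimum time -- can be made concrete with a trivial sketch. The locations and round-trip times below are entirely hypothetical, standing in for measurements a client would take against its contracted providers:

```python
# Choosing "the edge" empirically: for a given client, the edge is
# simply the candidate location with the lowest measured latency --
# which, per the routing caveat above, may not be the geographically
# nearest one. All names and figures here are hypothetical.
measured_rtt_ms = {
    "on-premises rack": 0.4,
    "CDN point of presence": 8.3,
    "regional IaaS zone": 12.7,
    "SaaS provider origin": 41.9,
}

edge = min(measured_rtt_ms, key=measured_rtt_ms.get)
print(f"Effective edge for this client: {edge} "
      f"({measured_rtt_ms[edge]} ms RTT)")
```

The point of the sketch is that "the edge" is relative to the observer: a different client, with different measurements, resolves to a different winner.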
It took 37 days for the Norwegian company contracted to build fiber optic connectivity for Brazil just to carry the optical cable for the project from the Atlantic Ocean, down the Amazon River, to the city of Manaus. Connectivity for this city's two million-plus inhabitants has never been a certainty. In 2014, cloud services analyst CloudHarmony ranked South America as the continent with the greatest service latency by far of any region in the world; the following year, it rated Brazil as the world's least reliable region for Amazon AWS service. By now, the people of Manaus are tired of the "Amazon" puns. Meanwhile, they consume fast food just like the rest of the world.
Calling a technological domain "the edge" gives it a cool sound, like it's just pushing the boundaries of some innovative envelope. So naturally, there are multiple subdomains of the world's wireless network that operators and equipment providers have staked out as "the edge." There is a "network edge" that you'd think would extend to the furthest boundaries of its coverage areas. Actually, the "network edge" can be inches away from the wireless core, if the functions being served there extend directly to the customer.

[Image: An edge-ready mini data center as envisioned by cabling solutions provider Datwyler.]