Why edge computing is both hyped and ignored

Opinion
Mar 14, 2024 | 7 mins
Data Center | Edge Computing

As a subset of distributed computing, edge computing isn’t new, but it exposes an opportunity to distribute latency-sensitive application resources more optimally.

Credit: Fit Studio / Shutterstock

Every single tech development these days is either mercilessly hyped or unfairly ignored, it seems. Edge computing is surely a tech development, so we should be able to quickly determine which of these two conditions applies, and let everyone go back to their coffee, right? No, because the truth is yet another possibility, which is that both conditions apply. We’re hyping the edge while ignoring its reality.

When we started to talk about edge computing, the definition was straightforward. There are applications that demand low latency, meaning a very small interval of time between an event that needs to be processed and the result of that processing. So small, in fact, that we needed special network services (remember, low latency was a 5G claim), and yet even those network improvements weren't enough. We needed to move computing closer to the user, meaning closer to the point where the events were generated and the results were delivered. The edge. The edge was the new cloud, the driver of new network services. Most recently, it became a requirement for AI. The gift that kept on giving, no matter where you were in tech.
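To make that latency budget concrete, here's a minimal sketch in Python. Every number in it is a hypothetical assumption, not a measurement from any real deployment: a 10 ms event-to-result deadline, a 3 ms processing time, and assumed round-trip latencies for three hosting points. The point it illustrates is the one above: past a certain distance, no network improvement saves you, and only moving the compute closer does.

```python
# Hypothetical latency budget: all numbers are assumptions, not measurements.
DEADLINE_MS = 10.0   # event-to-result deadline for a hypothetical control loop
PROCESS_MS = 3.0     # assumed time to compute the result once the event arrives

# Assumed round-trip network latency from the event source to each hosting point.
hosting_points = {
    "regional cloud": 40.0,
    "metro edge": 8.0,
    "on-premises edge": 1.5,
}

for name, rtt_ms in hosting_points.items():
    total_ms = rtt_ms + PROCESS_MS
    verdict = "meets" if total_ms <= DEADLINE_MS else "misses"
    print(f"{name}: {total_ms:.1f} ms event-to-result, {verdict} the {DEADLINE_MS:.0f} ms deadline")
```

In this toy budget, only the on-premises hosting point makes the deadline, which is exactly the argument that created the edge.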

Not only that, the edge is real. You can identify whole verticals, like manufacturing, warehousing, transportation, utilities, telecommunications, and even government, where edge applications are already in place. We have companies and industries that are utterly dependent on their edge computing. So many, in fact, that by this point you’re probably wondering why I say that edge computing is a hype wave, or whether the hype, in the case of the edge, was justified. Well, one good reason is that we’ve had this all for decades. If the edge is real, it’s old. How did we make it new again?

Remember distributed computing? Minicomputers and personal computers? For decades, we’ve built computers that were designed to be distributed, meaning put in places outside the data center. I’m typing on one now, and you’re reading this on another. Smartphones and smart watches are forms of distributed computing. So are the industrial controllers that have been running manufacturing and other applications for decades. When you go shopping at a big store, you’re using a distributed computing application when you check out, and you may be getting your shopping money from an ATM via distributed computing, too.

Distributed computing was based on a simple truth: if you have a concentration of activity that depends on computer access, having the computer that supports the activity locked in a distant data center invites major disruptions. If you have a critical piece of gear in your home, you don't want to run a couple hundred feet of extension cord to plug it into a neighbor's outlet. And yes, this whole distributed thing went off the rails as departments started buying their own systems to get around central IT development and deployment delays. And yes, cloud computing got its start with "server consolidation" to bring some of these distributed systems back under central control. And all of that is how we got the solid, sensible notion of the edge off into the land of hype.

It’s simple logic. Edge computing is a form of distributed computing, and distributed computing was an early driver of the cloud. Thus, edge computing must be a driver of the cloud, and just like those distributed servers, edge computers should be replaced by the cloud. The applications we’re currently running on premises, close to the activities they support, should be turned into cloud applications. Since these applications are highly latency-sensitive, that means we need to move cloud hosting points to the very edge of the network so we can get to them with minimal delay. The edge is the past, the present, the future.

But wait. Does that mean your smartphone will be replaced by the cloud? How about your PC? We've had virtual PCs for several decades now, but in the almost three thousand user conversations I've had over the last year, not a single one involved someone running a virtual PC instead of a real one. People want more stuff running in their phones, not less. In all those latency-sensitive verticals I've assessed, in all those conversations, across over four hundred enterprises, how many have replaced their real-time, industrial-strength, on-premises applications with cloud applications? None.

Sounds like an impasse, but it's actually the beginning of an insight. Forget current hype and go back to distributed computing. Edge computing is a subset of it. What we're missing is that it's going to stay that way, remain a subset. The future of computing isn't the cloud, or the edge, or the data center. We've spent the last fifty years evolving toward a distributed future, for so long that we didn't interpret the motion correctly. Which means we haven't interpreted what's needed correctly either.

Applications are already threading work across multiple compute points. If we’re going to build more real-time applications, the kind that are latency sensitive, we have to accept that we’re not going to pull every piece of a multi-hop workflow into one place at the edge of the cloud. We need the ability to distribute work optimally as much as we need places to distribute to. Edge computing adds a dimension to that need to distribute optimally, the dimension of latency management, both connection latency and process latency. Where the edge is, how far in or out it is, doesn’t matter because we know we can put resources wherever they’re needed. What we can’t do is put the application pieces where they have to go.
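What "putting the application pieces where they have to go" might look like, boiled down to a single placement decision, is sketched below. This is an illustration under stated assumptions, not any product's placement algorithm: the Site records, the latency figures, and the place() helper are all hypothetical, and real middleware would weigh cost, capacity, and reliability alongside connection and process latency.

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    connection_ms: float  # assumed network latency from the event source
    process_ms: float     # assumed processing time at this site

def place(sites: list[Site], deadline_ms: float) -> Site | None:
    """Pick the feasible site with the lowest total (connection + process) latency."""
    feasible = [s for s in sites if s.connection_ms + s.process_ms <= deadline_ms]
    return min(feasible, key=lambda s: s.connection_ms + s.process_ms, default=None)

sites = [
    Site("on-prem edge", connection_ms=1.5, process_ms=6.0),     # close by, modest hardware
    Site("metro edge", connection_ms=8.0, process_ms=2.0),
    Site("regional cloud", connection_ms=40.0, process_ms=1.0),  # fast hardware, far away
]

choice = place(sites, deadline_ms=10.0)
print(choice.name if choice else "no feasible placement")  # -> on-prem edge (7.5 ms total)
```

The design point worth noticing is that connection latency and process latency trade off against each other: the closest site isn't automatically the best one, which is why placement has to be computed rather than assumed.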

Distributable applications have to be designed that way, built to take advantage of middleware features and tools that do the optimization, and able to re-optimize as needed when conditions change. Or re-optimize when you start to consider the larger problem, which is that the sum of distributed parts has to create a glorious whole, both at the application level and at the business-and-society level.
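To show what "able to re-optimize as needed when conditions change" might mean in practice, here's a continuation of the previous hypothetical sketch. The measure_connection_ms probe is a stand-in for real network telemetry, and actually moving the work would involve migrating state, not just printing a message; both simplifications are assumptions for illustration.

```python
import random

def measure_connection_ms(site: Site) -> float:
    """Stand-in for a real latency probe: jitter the last known value."""
    return max(0.5, site.connection_ms + random.uniform(-3.0, 3.0))

def reoptimize(sites: list[Site], deadline_ms: float, rounds: int = 10) -> None:
    """Periodically re-run placement and move the work if a better site emerges."""
    current = place(sites, deadline_ms)
    for _ in range(rounds):
        for s in sites:
            s.connection_ms = measure_connection_ms(s)  # network conditions drift
        best = place(sites, deadline_ms)
        if best is not None and (current is None or best.name != current.name):
            print(f"conditions changed, moving work to {best.name}")
            current = best

reoptimize(sites, deadline_ms=10.0)
```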

We have smart buildings today, and we talk about smart cities, but what is a smart city? Maybe it's a collection of smart buildings with another layer of smartness added. Maybe the future of edge computing isn't created by moving today's edge hosting points into the cloud but by adding a higher-level hosting point. An assembly line doesn't assemble in a vacuum; it's part of a company that includes transporting parts and delivering products, with all the paperwork involved in both. It's an integrated entity, except that we don't really think of it that way, and all those pieces of company operation aren't fully integrated either. I asked over three hundred enterprises if they had a fully integrated application set running their business. None said they did, and that exposes the real issue with the edge.

We don’t need to come to grips with the edge. The edge isn’t even the real issue. It’s not the next generation of the cloud, or the first generation of IoT, or the new model of the data center. It’s a piece of something we’ve been working with for fifty years, a piece of distributed computing.

What we do need to come to grips with is that you have to distribute computing without dividing it. A building, a company, a city, a country: all are collective entities, and each is made up of smaller, distributed pieces. For fifty years, we've been trying to get distributed computing right. All this talk about the edge is just proof we're not there yet, that we're still focusing on parts and not the whole. Want a mission for the future? Creating the glorious whole is the right one.

Tom Nolle is founder and principal analyst at Andover Intel, a unique consulting and analysis firm that looks at evolving technologies and applications first from the perspective of the buyer and the buyers’ needs. Tom is a programmer, software architect, and manager of large software and network products by background, and he has been providing consulting services and technology analysis for decades. He’s a regular author of articles on networking, software development, and cloud computing, as well as emerging technologies like IoT, AI, and the metaverse.
