
The Cloud-Era Of Computing Is Just About Over, So What's Next?

Forbes Technology Council
POST WRITTEN BY
Mark Lewis

Most technology startups founded in the past decade are either building software for the cloud or running their applications in the cloud. It has become so trendy that many venture investors will simply not invest in any “tech” that is not “cloud-based.” With the cloud as popular as ever, it may seem like heresy to say that it is all about to change, but that is exactly what happens with technology and innovation. As soon as you start to get comfortable, everything changes.

I believe the next era of computing will shift the focus to what is known as “edge computing,” which, in many ways, is the anti-cloud. Yes, this sounds like a cycling of fashion trends where we alternate between wide ties and narrow ties for no apparent reason, but that is not the case. Specific requirements are driving this need for change.

Here is how to think about it. Today, most applications deliver content to people. Video, music, Alexa, Siri and all manner of applications rely almost exclusively on the cloud for their compute and data storage needs. This centralization (to massive clouds) is very economical for the producer. Network bandwidth is not much of an issue anymore. While there is a slight delay for any cloud interaction, for the most part, that half-second delay is within our tolerance for “real-time.”

Many next-gen applications, however, are not about interacting with people at all; they focus on machine-to-machine interaction. Concepts like the internet of things (IoT), machine learning and artificial intelligence (AI) all involve gathering and processing incredible amounts of data. Most of this sensor data is not generated in the cloud; it is created at the edge. The challenge for engineers fundamentally comes down to dealing with the one “speed” in technology that does not follow Moore’s Law: the speed of light. Simply put, the time it takes for a signal to travel from point A to point B is exactly the same as it was 100 years ago, and it is unlikely to ever change.
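To put a number on that, here is a minimal sketch (assuming signal propagation through optical fiber at roughly 200,000 kilometers per second, about two-thirds the speed of light in a vacuum; the distances are illustrative):

```python
# Minimal sketch: the floor that distance alone puts on round-trip time,
# assuming signals travel through fiber at ~200,000 km/s (about 2/3 of c).
FIBER_KM_PER_MS = 200.0  # kilometers covered per millisecond

def min_round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip time in milliseconds, ignoring routing and queuing."""
    return 2 * distance_km / FIBER_KM_PER_MS

for distance_km in (1, 100, 1_000, 4_000):  # nearby edge site vs. distant cloud region
    print(f"{distance_km:>5} km away: at least {min_round_trip_ms(distance_km):.2f} ms round trip")
```

No amount of faster silicon changes these floors; only shortening the distance does.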

The term we use in computing is latency: the period of time it takes for a certain action to occur. In computing terms, latency is often a more important speed factor than bandwidth (which is a measure of how much data can pass through a given circuit). A simple analogy for latency and bandwidth is an assembly line. Think of latency as the amount of time it takes to build a widget and bandwidth as the total number of individual assembly lines. In this case, if you could halve the time it takes to build the widget, you could meet production needs with half the number of lines.
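To make the arithmetic behind that analogy explicit, here is a small sketch (the widget-build times and the target production rate are illustrative numbers, not figures from the article):

```python
# Assembly-line analogy: lines needed = target rate / (rate of one line).
# Halving the time per widget (latency) meets the same demand with half the lines (bandwidth).
def lines_needed(target_widgets_per_hour: float, minutes_per_widget: float) -> float:
    widgets_per_line_per_hour = 60.0 / minutes_per_widget
    return target_widgets_per_hour / widgets_per_line_per_hour

demand = 120  # widgets per hour (illustrative)
print(lines_needed(demand, minutes_per_widget=10))  # 20.0 lines
print(lines_needed(demand, minutes_per_widget=5))   # 10.0 lines, half as many
```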

In an IoT transaction, most of the transaction time is not consumed by processing the data; it is consumed by the latency of moving the data to and from the cloud. The only way to reduce this transaction time is to place the transaction engine closer to the device. Edge computing places “mini-datacenters” locally, much the same way wireless operators place individual cell towers. The transaction between an endpoint (sensor, mobile device or other system) and the primary system (which could be something like an AI inference engine) would occur at the local level. This would improve transactional performance in most cases by at least a factor of 10. For aggregation purposes, the data would likely still be forwarded to cloud-based systems, but the performance need there is not critical because the transaction is already complete.
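A back-of-the-envelope sketch of that claim (the millisecond figures are assumptions chosen only to match the rough orders of magnitude discussed here, not measurements):

```python
# Back-of-the-envelope transaction budget: when processing is fast,
# the network round trip dominates. All figures below are illustrative assumptions.
def transaction_ms(round_trip_ms: float, processing_ms: float) -> float:
    return round_trip_ms + processing_ms

cloud = transaction_ms(round_trip_ms=100.0, processing_ms=2.0)  # distant cloud datacenter
edge = transaction_ms(round_trip_ms=5.0, processing_ms=2.0)     # local edge "mini-datacenter"

print(f"cloud: {cloud:.0f} ms, edge: {edge:.0f} ms, roughly {cloud / edge:.0f}x faster at the edge")
```

The processing time is the same in both cases; the gain comes entirely from shrinking the round trip.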

As more and more sensors and machine automation such as self-driving cars proliferate, these new devices will feed off each other’s data. The problem is that this sharing will often need to happen quickly, very quickly, to get the best results. Imagine two planes needing information on each other’s location, or the speed at which two driverless cars would need data to avoid a collision. There is simply not enough time to send data to the cloud and back. As most coordination across this new class of machines will be “localized,” the most profound performance improvements can be achieved by reducing the transaction latency.

Ultra-low latency cannot be achieved with our present cloud-based systems; the speed of light becomes the limiting factor. A localized transaction might have a delay of one to 10 milliseconds, but a cloud transaction would typically have a delay of 100 milliseconds or more. While this difference is not even detectable in terms of human/computer interaction, it is an eternity in a machine-to-machine interaction. The solution will be to add edge computing and data storage systems capable of providing ultra-low transaction latency. These “edge-clouds” will provide the backbone for a new generation of machine-to-machine transactions, enabling a whole new suite of capabilities.
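One way to feel why 100 milliseconds is an eternity for machines is to look at how far a vehicle travels while waiting for a reply (a hedged illustration; the highway speed and the 5 ms edge figure are assumptions, and the 100 ms cloud figure is the ballpark mentioned above):

```python
# How far a car moves while waiting on a reply, at an assumed highway speed of 120 km/h.
SPEED_KMH = 120.0
METERS_PER_MS = SPEED_KMH * 1000.0 / 3_600_000.0  # ~0.033 meters per millisecond

for label, latency_ms in (("edge round trip (~5 ms)", 5.0), ("cloud round trip (~100 ms)", 100.0)):
    print(f"{label}: the car travels ~{latency_ms * METERS_PER_MS:.1f} m before the data arrives")
```

Waiting on a distant cloud means the car has moved several meters, roughly a car length, before the answer comes back; the edge reply arrives within a fraction of a meter.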

Just like mainframes and PCs, cloud computing has its place and will by no means disappear. Edge computing is not about replacing what works today; it is about enabling the next wave of computing. The addition of edge computing capabilities will become the foundation that makes technologies like AI and IoT even more disruptive.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.